Hi Everyone, I have multiple CSV files (>100). They are rain-gauge station files for precipitation measurements. The number of stations is not the same in every file (i.e. some stations are missing from some files). I want to keep only the stations that are present in all the files. Each file has a unique station id in column #3. Is this possible in Linux?
It may be something along the lines of: for h in *.csv; do sed '?????' $h > rippe_$h && mv rippe_$h $h.xls ; done
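A minimal two-pass sketch in awk, assuming comma-separated files with no header line and the station id in column 3 (adjust -F and the column number to match the real layout):
Code:
# Pass 1: collect the station ids that appear in every file.
n=$(ls *.csv | wc -l)
awk -F, -v total="$n" '
    !seen[FILENAME FS $3]++ { count[$3]++ }   # count each id once per file
    END { for (id in count) if (count[id] == total) print id }
' *.csv > common_ids.txt

# Pass 2: keep only the rows whose station id is on the common list.
for f in *.csv; do
    awk -F, 'NR==FNR { keep[$1]; next } $3 in keep' common_ids.txt "$f" > "common_$f"
done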
I have several csv files, and I want to remove the first row (header) and the first column as well. This is my code, but it overwrites the original files with no content inside:
for x in *.csv;
do
sed '1d' $x | cut -d, -f 1 > "$x";
done
Help would be highly appreciated.
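The empty files come from the redirection: > "$x" truncates the file before sed gets a chance to read it. Also, cut -f 1 keeps only the first column rather than removing it; -f 2- keeps everything except the first column. A sketch of the fix, writing to a temporary file first:
Code:
for x in *.csv; do
    # 1d drops the header row; -f2- drops the first column
    sed '1d' "$x" | cut -d, -f2- > "$x.tmp" && mv "$x.tmp" "$x"
done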
I have a few score of files (>50) in FASTA format. These work fine on Linux,
but I have to send them to a colleague who uses Windows, and the files don't open properly in Notepad or WordPad. Doing a "save as" to Windows format does the trick,
but I don't want to manually convert all of them.
Is there a way I can convert multiple files and save them in a format of my choosing, using, say, the terminal?
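A sketch, assuming the issue is line endings (Notepad expects CRLF, Unix files use bare LF) and a hypothetical .fasta extension. unix2dos, from the dos2unix package, converts in place; the sed line is a rough equivalent if that package isn't available:
Code:
for f in *.fasta; do
    unix2dos "$f"                 # rewrite LF line endings as CRLF, in place
    # or, with GNU sed: sed -i 's/$/\r/' "$f"
done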
Does anyone know a way to copy two files to multiple computers? I'm thinking of scp, as the flavor of Linux we're using does not include rdist.
I've read that scp can't push to multiple machines in one command; however, maybe some scripting genius has figured out a way. Running two scripts (one for each file) is perfectly OK!
If anyone cares to post very clear examples (I'm definitely not a programmer...) of scripts, etc., that would be great.
Thanks in advance to all those who can help!
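A minimal sketch, assuming SSH keys are set up so scp doesn't prompt for a password, and a hypothetical hosts.txt listing one machine name per line. Note that scp happily takes several source files in one command; the loop is only there to cover multiple destination hosts:
Code:
for host in $(cat hosts.txt); do
    scp file1 file2 "$host:/destination/dir/"
done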
I am currently running a system simulation on multiple files.
I have a computer algorithm written in Perl that runs "system" simulations for all the files I need.
What I am trying to do is put multiple files into one file; the only problem is that it's not doing exactly what I need it to do.
Example:
I am "cat txt0.txt txt1.txt txt2.txt txt3.txt > allfiles.txt"
I need it to read as
txt0.txt
txt1.txt
txt2.txt
txt3.txt
Instead, it's taking all the files and putting the information within each txt file together. The contents look like this:
fdfasdfqwdefdfefdkfkkkkkkkkkkkkkkkfsdfasdxfewqfe..........
all clustered together. You get the picture?
I am really confused about how to get this to work; there are over 100 files that need to go into a single file.
That way, when I run it through the Perl algorithm I created, I can do it in one shot.
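If the goal is a file containing the file names, one per line, rather than their contents, then cat is the wrong tool, since it concatenates contents. A sketch:
Code:
printf '%s\n' txt*.txt > allfiles.txt    # one matching file name per line
# or simply: ls txt*.txt > allfiles.txt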
Hello Folks.
I'm searching for an easy way to rename multiple files from the CLI, but I didn't find one, so I'm reaching out to you guys for help.
This is what I want to do (from the CLI or a script). I want to move files with a sequence number in the name (msg0000, msg0001, msg0002 and so on) so that they continue another sequence, say msg0066, msg0067 and so on. Each of these names corresponds to a set of files (msg0000.wav, msg0000.WAV and msg0000.txt).
The idea is to move them from one directory to another, following the sequence of file names in the receiving directory. Is there a way I can do this pain-free?
Any help on this matter will be greatly appreciated; I'm talking about over 100 files I need to move following the sequence of the receiving directory.
Thanks!
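A sketch under a few stated assumptions: the sources live in ./src, the destination is ./dst, and the destination numbering should continue from 66 (all placeholders to adjust). It walks the .txt files in name order and moves each trio together:
Code:
n=66
for txt in src/msg????.txt; do
    old=$(basename "$txt" .txt)       # e.g. msg0000
    new=$(printf 'msg%04d' "$n")      # e.g. msg0066
    for ext in txt wav WAV; do
        [ -f "src/$old.$ext" ] && mv "src/$old.$ext" "dst/$new.$ext"
    done
    n=$((n + 1))
done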
I'm trying to figure out if find could do this. I have a folder with 1000 files, and I want to delete 150 files in that folder regardless of timestamp or filename. If there is a tool, command, or option of find that could do this, please let me know.
Combining mtime or ctime with find is not advisable, since it will not count the files; even if there are matches, I would still need to add up the matches until I reach 150 files.
Any suggestions?
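One sketch: have find list the files, take the first 150 (the order is essentially arbitrary, which matches the "regardless of timestamp and filename" requirement), and delete them. This assumes GNU xargs and file names without embedded newlines:
Code:
find . -maxdepth 1 -type f | head -n 150 | xargs -d '\n' rm --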
Hello,
I am new to Linux images (PXE, live CD). I would like to add files to a Linux image, like something under /etc or /var, and have the files be available on the client.
The server I am working on was already configured with a PXE image, and only two files are present under the PXE client folder: initrd and vmlinuz. So I am wondering whether either of these files contains the directories /etc, /var, etc., and how I could add files to them.
To give some background, I have done the same thing in Windows. An image in Windows is typically either boot.wim or install.wim. You can mount either of these to a folder using the Windows SDK tool imagex.exe /mountrw <img file> 1 <mount point>. From there you can add/remove/modify any files you want, then commit the changes with imagex.exe /unmount /commit <mount point>.
Can someone provide insight into the Linux image creation process, explain which of the files (initrd, vmlinuz/vmlinux, etc.) contains what for the client boot, or point to something similar to the Windows image editing process?
I know I'm asking for a range of info, but pointers to any material that would help my understanding will be greatly appreciated.
Thanks,
Jon
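Roughly speaking, vmlinuz is the compressed kernel and initrd is the initial ramdisk holding the early root filesystem, so initrd is the file to edit. A sketch, assuming the initrd is a gzip-compressed cpio archive (check with 'file initrd' first, since some images are compressed filesystem images instead, and the client may still fetch the rest of its root over NFS or similar):
Code:
mkdir initrd-work && cd initrd-work
zcat ../initrd | cpio -idmv                 # unpack the archive into the current dir
cp /path/to/yourfile etc/                   # add or change files (path is a placeholder)
find . | cpio -o -H newc | gzip > ../initrd.new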
I am trying to put back together a big tar file from some smaller tar files that I created several years ago. The issue is that in order to rebuild this large file, I must read each volume back using the command
tar -xMf cd-1.tar
Prepare volume #2 for 'cd-1.tar' and hit return: n cd-2.tar
Prepare volume #3 for 'cd-2.tar' and hit return: n cd-3.tar
and so forth.
I have fourteen files, cd-1.tar through cd-15.tar. The cd-9.tar file is missing and I assume that it is gone. Now when I type the commands in, I get the following:
Code:
-linux tarfile]$ tar -xMf cd-1.tar
Prepare volume #2 for `cd-1.tar' and hit return: n cd-2.tar
Prepare volume #3 for `cd-2.tar' and hit return: n cd-3.tar
Prepare volume #4 for `cd-3.tar' and hit return: n cd-4.tar
Prepare volume #5 for `cd-4.tar' and hit return: n cd-5.tar
Prepare volume #6 for `cd-5.tar' and hit return: n cd-6.tar
Prepare volume #7 for `cd-6.tar' and hit return: n cd-7.tar
Prepare volume #8 for `cd-7.tar' and hit return: n cd-8.tar
Prepare volume #9 for `cd-8.tar' and hit return: n cd-10.tar
tar: This volume is out of sequence (10755138772 - 4889670868 != 6598651392)
Prepare volume #9 for `cd-10.tar' and hit return: n cd-10.tar
tar: This volume is out of sequence (10755138772 - 4889670868 != 6598651392)
Prepare volume #9 for `cd-10.tar' and hit return:
tar: This volume is out of sequence (10755138772 - 4889670868 != 6598651392)
As you can see, I do not have cd-9.tar. That stops the untarring cold. However, I do have cd-10.tar, cd-11.tar, cd-12.tar, cd-13.tar, cd-14.tar, and cd-15.tar. I may have these files, but they cannot be put back into the main file because cd-9.tar is missing and everything must be restored sequentially.
Is there a way to complete this sequence of steps and add all fourteen files to the file bigbackup, leaving out cd-9.tar? That means that bigbackup will be incomplete, but that is better than no file at all, or than having bigbackup missing six files on the back end.
Any help appreciated.
Thanks in advance.
Respectfully,
Newport_j
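One possible, admittedly lossy, salvage approach to try on copies of the archives: run volumes 1 through 8 as a normal multivolume extraction, answer q at the volume #9 prompt to abort, then read the post-gap volumes individually. Each later volume begins with the continuation of a member split across the gap, which tar will complain about, but members stored wholly inside a volume can often still be pulled out:
Code:
tar -xMf cd-1.tar      # feed cd-2.tar ... cd-8.tar at the prompts, then q at volume #9
for v in cd-10.tar cd-11.tar cd-12.tar cd-13.tar cd-14.tar cd-15.tar; do
    tar -xf "$v" --ignore-zeros    # expect warnings about the split member at the front
done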
I recently crashed my LMDE system and, trying to retrieve something from the disaster, booted up the failsafe OS and copied a number of files onto a USB stick using the 'cp' command.
'ls -alrt /media/dougb' showed that the files were present and correct on the stick.
However, when I attached the stick to a working LMDE PC, there appeared to be nothing on it, whether I looked at the stick in the desktop file manager or listed it in a terminal.
I've tried changing the permissions and/or ownership of the files (via the broken OS) without success.
Why aren't the files visible? What can I do about it?
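A sketch of a quick check, on the guess that the files were copied into the /media/dougb directory while the stick wasn't actually mounted there, or that the stick was pulled before the writes were flushed to it:
Code:
mount | grep media        # is the stick really mounted, and on which directory?
ls -alrt /media/dougb     # re-check the listing with the stick plugged in
sync                      # flush any pending writes
umount /media/dougb       # unmount cleanly before unplugging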
Hello everyone! I'm somewhat new to Linux, and getting my feet wet by building my first Linux server.
So what I have is an application that moves/sorts files, and another program that catalogs them.
The problem is that each app runs as its own user. So my question is whether there is any way that files owned by prog1user can be read by prog2user.
I have tried doing a chmod -R 755 Directory, and that has allowed the second program to see the files, but I'm guessing this has certain security risks (although I'm not so worried about the files in this directory).
Anyway, I was wondering if there is a proper way to do this? The OS is Debian Wheezy.
Cheers!
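A common pattern is a shared group rather than a world-readable 755. A sketch, with sharedgrp and the path as placeholders, run as root:
Code:
groupadd sharedgrp
usermod -aG sharedgrp prog1user
usermod -aG sharedgrp prog2user
chgrp -R sharedgrp /path/to/Directory
chmod -R g+rX /path/to/Directory   # group read; X adds execute on directories only
chmod g+s /path/to/Directory       # setgid: new files created here inherit the group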