How To Delete Header (First Row) And First Column From Multiple CSV Files

I have several CSV files, and I want to remove the first row (the header) as well as the first column. This is my code, but it overwrites the original files with no content inside:

for x in *.csv; do
    sed '1d' $x | cut -d, -f 1 > "$x"
done

Help would be highly appreciated.
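For what it's worth, a sketch of a fix: the loop empties each file because > "$x" truncates the file before sed gets to read it, and cut -d, -f 1 keeps only column 1 rather than removing it. Writing to a temporary file first, and asking cut for fields 2 onward, addresses both (demo.csv and its contents below are made up for the demo):

```shell
# Work in a scratch directory with one invented CSV file.
cd "$(mktemp -d)"
printf 'h1,h2,h3\na,b,c\nd,e,f\n' > demo.csv

for x in *.csv; do
    # sed '1d' drops the header row; cut -f2- drops the first column.
    # Write to a temp file, then replace the original only on success.
    sed '1d' "$x" | cut -d, -f2- > "$x.tmp" && mv "$x.tmp" "$x"
done

cat demo.csv
```

After the loop, demo.csv holds only "b,c" and "e,f".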


Similar Content



How To Filter Multiple Csv Files According To Complete Rows

Hi Everyone, I have multiple CSV files (>100). They are rain-gauge station files for precipitation measurements. The number of stations is not the same in every file (i.e. there are missing stations). I want only the stations that are present in all the files. The files have a unique station id in column #3. I want to ask if this is possible in Linux?

It may be something along the lines of: for h in *.csv; do sed '?????' $h > rippe_$h && mv rippe_$h $h.xls ; done
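One possible approach, assuming each station id appears at most once per file and that the glob *.csv matches only the gauge files: read the whole file list twice with awk, counting ids on the first pass and filtering on the second. The two toy files and the .common output suffix below are invented for the demo:

```shell
# Scratch directory with two invented gauge files; column 3 is the id.
cd "$(mktemp -d)"
printf 'a,1,S01,0.2\nb,1,S02,0.0\n' > g1.csv
printf 'c,2,S01,1.4\nd,2,S03,0.7\n' > g2.csv

files=(*.csv)
total=${#files[@]}

awk -F, -v total="$total" '
    pass == 1 { seen[$3]++; next }          # pass 1: count files per id
    seen[$3] == total {                     # pass 2: id present in all files
        print > (FILENAME ".common")        # filtered copy alongside original
    }
' pass=1 "${files[@]}" pass=2 "${files[@]}"

cat g1.csv.common
```

Only station S01 appears in both files, so each .common file keeps just its S01 row.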

Grep: Find Files That Do Not Have Multiple Different Strings

Hi all,

I'm trying to identify files that do not have matches for certain strings. FYI, these are files of DNA sequences and I'm trying to find those that are NOT sampled for any species by my group of interest (e.g., genes that are specific to that group of organisms).

I tried this code but it's actually yielding a list of files that DO match my regexp.
Code:
for FILENAME in *.fas
do
grep -q -L ">PBAH" $FILENAME && grep -q -L ">SKOW" $FILENAME && grep -q -L ">CGRA" $FILENAME && echo $FILENAME
done

Basically I want to somehow go through and find files that do not contain ">PBAH", ">SKOW" or ">CGRA". Any assistance would be greatly appreciated!

Best,
Kevin
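A sketch of one way: -q and -L work against each other (-q suppresses the very output -L would print). grep -L alone prints the names of files with no match for any of the given patterns, which seems to be what's wanted. The two .fas files below are invented for the demo:

```shell
# Scratch directory with two invented FASTA files.
cd "$(mktemp -d)"
printf '>PBAH_1\nACGT\n' > a.fas
printf '>HSAP_1\nTTGA\n' > b.fas

# -L: list files that contain NO match for any -e pattern.
grep -L -e '>PBAH' -e '>SKOW' -e '>CGRA' *.fas
```

Here only b.fas is printed, since a.fas matches >PBAH.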

Command Line To Remove Column 10 From A .csv File

Hello. I am new to Linux and am looking for a solution to remove column 10 from a .csv data file as this column is causing me problems. Any help would be appreciated.
Thank you!
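A sketch, assuming a simple comma-separated file with no quoted fields that themselves contain commas: cut can select every field except the tenth. The single made-up row below stands in for the real data:

```shell
# Scratch directory with one invented 11-column row.
cd "$(mktemp -d)"
printf 'c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11\n' > data.csv

# Keep fields 1-9 and 11 onward, i.e. drop column 10.
cut -d, -f1-9,11- data.csv
```

Redirect to a new file (cut -d, -f1-9,11- data.csv > trimmed.csv) to keep the result.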

Remove Column From .CSV Using AWK Command

Hi,

I am trying to remove a column from a .CSV file using an awk command. Below is the input:

Name,Age,Description,Location
Gopal,32,Tech Lead,"Ramnagar, Naintal"
Gaurav,24,"Linux Admin, Expert","Noda"

I want columns 1, 2 and 4, and I am using the command mentioned below:

awk -F, ' BEGIN { OFS="," }
{
print $1,$2,$4
}' new.csv > new-done.csv


but instead of getting Gaurav,24,Noda

I am getting

Name,Age,Location
Gopal,32,"Ramnagar
Gaurav,24, Expert"

Having Problem While Inserting New Entries In Csv File

Hi Experts,

I am trying to make new entries in a new column of a CSV file, but I am not able to do so. Please help with the same.

Requirement:

There are multiple directories, and within those directories I have sub-directories; I want to build a CSV file with 2 columns mapping directories to their sub-directories. Can you please help me with this? I tried the following code:

Code:
#!/bin/bash

homeDir="$HOME"

ls ~/Parent/ | cut -c1-9 > ~/test_111.csv

while read Child
do
    Entry="$(ls $homeDir/Parent/$Child/ABC/XYZ/DEF/PQR)"
    echo $Entry
    for (( c=1; c<=5; c++ ))
    do
        sed -i ci"$Entry" test_222.csv
    done
done < test_111.csv

Basically I want a CSV file with two columns: the first column should have the Child name and the second column should have the sub-directory name inside the PQR directory.

Any help will be useful on this.

Thanks in Advance!

Best Regards,
Vijay Bhatia
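A sketch of one way to build the mapping with plain globs rather than parsing ls and sed, assuming the ABC/XYZ/DEF/PQR layout from the post. The scratch tree and the mapping.csv name below are invented for the demo:

```shell
# Scratch tree mimicking the layout described in the post.
cd "$(mktemp -d)"
mkdir -p Parent/child1/ABC/XYZ/DEF/PQR/subA
mkdir -p Parent/child1/ABC/XYZ/DEF/PQR/subB
mkdir -p Parent/child2/ABC/XYZ/DEF/PQR/subC

: > mapping.csv
for child in Parent/*/; do
    name=$(basename "$child")
    # One CSV row per sub-directory found under this child's PQR.
    for sub in "$child"ABC/XYZ/DEF/PQR/*/; do
        [ -d "$sub" ] || continue          # skip children without a PQR
        printf '%s,%s\n' "$name" "$(basename "$sub")" >> mapping.csv
    done
done

cat mapping.csv
```

Each row is "child,subdir", one per sub-directory, which matches the two-column requirement.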

Need Help Cat Multiple Files To One File

I am currently running a system simulation on multiple files.
I have a computer algorithm written in perl to run "system" simulations for all the files I need

What I am trying to do is put multiple files into one file; the only problem is that it's not doing exactly what I need it to do.

Example:

I am running "cat txt0.txt txt1.txt txt2.txt txt3.txt > allfiles.txt"

I need it to read as

txt0.txt
txt1.txt
txt2.txt
txt3.txt

Instead it's taking all the files and the information within each txt file and putting it all together, so it looks like this


fdfasdfqwdefdfefdkfkkkkkkkkkkkkkkkfsdfasdxfewqfe..........

all clustered together

You get the picture?

I am really confused about how to get this to work; there are over 100 files that need to go into a single file.
That way when I run it through the perl algorithm I created, I can do it in one shot.
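If the goal is one combined file where each chunk is labelled with the name of the file it came from, a loop that prints the filename before each cat may be what's wanted (a guess at the intent; the two toy txt files below are made up):

```shell
# Scratch directory with two invented input files.
cd "$(mktemp -d)"
printf 'alpha\n' > txt0.txt
printf 'beta\n'  > txt1.txt

for f in txt*.txt; do
    printf '%s\n' "$f"   # the filename as a header line
    cat "$f"             # then that file's contents
    echo                 # blank separator between chunks
done > allfiles.txt

cat allfiles.txt
```

With over 100 files the same glob covers them all in one shot, ready for the perl step.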

Print Column With A String Using Awk Or Grep

Hi All,

Using the grep command with a defined string, you get the matching row. Can I get the column instead of the row? For example,

the file content is as below:

AA BB
123 456
789 ABE

execute 'grep ABE file' will give you "789 ABE". Is there any way to get the column:
BB
456
ABE
?

Thank you very much.
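One way, assuming whitespace-separated columns and that the string occurs in exactly one column: a two-pass awk that finds the column index on the first read and prints that whole column on the second. The sample file mirrors the one in the post:

```shell
# Scratch directory with the sample table from the post.
cd "$(mktemp -d)"
printf 'AA BB\n123 456\n789 ABE\n' > file

# Pass 1 (NR == FNR): scan every field for the string, remember its column.
# Pass 2: print that column for every row.
awk -v s="ABE" '
    NR == FNR { for (i = 1; i <= NF; i++) if ($i == s) col = i; next }
    { print $col }
' file file > cols.txt

cat cols.txt
```

For "ABE" this prints the second column: BB, 456, ABE.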

How To Delete Number Of Files

I'm trying to figure out if find could do this. I have a folder with 1000 files. I want to delete 150 files in that folder regardless of timestamp and filename. Is there a tool, command or option of find that could do this? Please let me know.

Combining mtime or ctime with find is not advisable, since it will not count the files; even if there are matches, I would still need to add up the files until I reach 150.

Any suggestions?
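If any 150 files will do, one sketch is to take the first N names a listing produces and delete those. Note this breaks on filenames containing spaces or newlines; the 20 scratch files below stand in for the 1000 (deleting 5 instead of 150):

```shell
# Scratch directory with 20 invented files.
cd "$(mktemp -d)"
for i in $(seq 1 20); do : > "file$i"; done

# Delete the first 5 names ls emits, regardless of age or name.
# Caveat: plain xargs mangles names with spaces or newlines.
ls | head -n 5 | xargs rm --

ls | wc -l
```

15 files remain; swap 5 for 150 for the real folder.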

Splitting A Huge Textfile By Regular Expressions

Hi!

I have a fasta file with biological DNA sequences.
Fasta files are built like this:
>This_is_a_FASTA_header
TTTATATATAGACGATGACGATGACA
>The_next_sequence_begins
GGGCACAGTAGCAGA
>And_another
TGCGAGAGGTAGTAGAT

In my case all the header lines (starting with ">") carry one of 360 indices right after the ">":
>001_blabla
....
>360_blabla

I want to split my big combined fasta file into 360 single files with sequences sharing the same index.

Thank you very much!
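A sketch, assuming the 3-digit index always occupies the three characters right after ">": let awk switch the output file on every header line, so each sequence goes to the file named after its index. (gawk will happily keep 360 output files open; some other awks cap open files lower.) The toy input below is invented:

```shell
# Scratch directory with a small invented combined FASTA file.
cd "$(mktemp -d)"
cat > combined.fa <<'EOF'
>001_blabla
TTTATATATAGACGATGACGATGACA
>002_next
GGGCACAGTAGCAGA
>001_another
TGCGAGAGGTAGTAGAT
EOF

# On each header, pick the output file from the 3-digit index;
# every line (header or sequence) is appended to the current file.
awk '/^>/ { out = substr($0, 2, 3) ".fas" }
     { print >> out }' combined.fa

ls *.fas
```

Both 001 records, header and sequence, land together in 001.fas.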

Need Help In Bash Scripting

I have two files which have exactly the same number of lines.
I want first line of first file should be filename of new file and content of this new file should be first line of second file.
Then second line of first file should be filename of again new file and content of this new file should be second line of second file.
then third line of first file should be filename of again new file and content of this new file should be third line of second file.
and so on...
I am trying to do it using a for loop, but I am not able to combine the two loops.
This is what I have done
Code:
IFS=$'\n'
var=$(sed 's/\"http\(.*\)\/\(.*\).wav\"\,\".*/\2/g' 1797.csv) # filenames of all files
var2=$(sed 's/\"http\(.*\)\/\(.*\).wav\"\,\"\(.*\)\"$/\3/g' 1797.csv) # contents of all files
for j in $var;
do
#Here I do not know how to use $var2
done

Please help.
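One sketch for walking two equal-length lists in lockstep, assuming the names and contents have first been written out to two files (e.g. by redirecting the two sed commands above): feed the second file to the loop on a separate file descriptor, so each read of a name is paired with a read of a content line. The two toy list files below are made up:

```shell
# Scratch directory with two invented parallel lists.
cd "$(mktemp -d)"
printf 'out1.txt\nout2.txt\n' > names.txt
printf 'first line\nsecond line\n' > contents.txt

# stdin feeds names.txt; fd 3 feeds contents.txt, one line per iteration.
while IFS= read -r name && IFS= read -r content <&3; do
    printf '%s\n' "$content" > "$name"
done < names.txt 3< contents.txt

cat out2.txt
```

Line N of the first file becomes a filename holding line N of the second file, with no nested loops needed.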