Hello,
I have a flat file where I expect each line to have 5 values delimited by 4 commas (","):
agf,sdya,geg,fgd,gdfgr
but sometimes I have the following:
agf,sdya,geg,fgd
agf,sdya,geg
agf,sdya
agf
For those lines, I want to append commas at the end so that every line has a total of 4 commas:
agf,sdya,geg,fgd,
agf,sdya,geg,,
agf,sdya,,,
agf,,,,
The following awk command already gives me the count number of "," for each line:
awk -F\, '{print NF-1}' "MyFile"
But I am not sure where to go from there.
Basically I want to do the following:
If CommaCount(CurrentLine) != 4 Then Append (4 - CommaCount) commas to the line
Else check the next line.
Thanks for your help !
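A minimal awk sketch of that logic, assuming the file is named MyFile as above: print each line unchanged, then append one comma for every field it falls short of five.
Code:
awk -F, '{ printf "%s", $0; for (i = NF; i <= 4; i++) printf ","; print "" }' MyFile
A full line (NF == 5) passes through untouched, while "agf" (NF == 1) gets four commas appended.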
I want to append lines after a match in a file.
##file name is ssl.conf
##match is this
<Directory "/var/www/cgi-bin">
SSLOptions +StdEnvVars
</Directory>
After the above lines, I need to append these lines:
<Directory "/">
SSLRenegBufferSize 26215000
</Directory>
so the final result should look like this:
<Directory "/var/www/cgi-bin">
SSLOptions +StdEnvVars
</Directory>
<Directory "/">
SSLRenegBufferSize 26215000
</Directory>
######Thank You in Advance
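In case it helps, a hedged awk sketch, assuming the cgi-bin block occurs once and ends at the first </Directory> after the match:
Code:
awk '
/<Directory "\/var\/www\/cgi-bin">/ { inblock = 1 }
{ print }
inblock && /<\/Directory>/ {               # closing tag of the matched block
    print "<Directory \"/\">"
    print "SSLRenegBufferSize 26215000"
    print "</Directory>"
    inblock = 0
}
' ssl.conf > ssl.conf.new && mv ssl.conf.new ssl.conf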
Hello
I have a text file which has blocks like
Code:
dir1/dir2/dir3/name_run_number1:
line1_run_number1_part1
line2_run_number1_part2
line3_run_number1_part3...
Each block is separated from the next by a blank line, the "header" line of each block ends with ":", and every line in a block carries the same "number1" after "run_".
What I want to do is, for each block, extract the "number1" shown in the first line, then check the lines below for parts 1-20 and print a message if a "partX" line is missing. Any bash or python would be fine.
Thanks
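A gawk sketch in paragraph mode (an empty RS makes each blank-line-separated block one record and every line of it a field, assuming the lines contain no spaces):
Code:
gawk -v RS= '
{
    # the header field looks like dir1/dir2/dir3/name_run_<number>:
    match($1, /run_[^:]+/)
    run = substr($1, RSTART + 4, RLENGTH - 4)
    delete seen
    for (i = 2; i <= NF; i++)               # collect the part numbers present
        if (match($i, /part[0-9]+$/))
            seen[substr($i, RSTART + 4) + 0] = 1
    for (p = 1; p <= 20; p++)               # report the ones missing from 1-20
        if (!(p in seen))
            printf "run %s: part%d is missing\n", run, p
}
' file.txt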
Hello everyone,
Although it seems easy, I've been stuck with this problem for a moment now and I can't figure out a way to get it done.
My problem is the following:
I have a file where each line is a sequence of IP addresses, example :
Line 1: 10.0.0.1 10.0.0.2
Line 2: 10.0.0.5 10.0.0.1 10.0.0.2
...
What I'd like to do is remove every line whose addresses all appear in some other line. In the previous example, "Line 1" would be deleted, as it is contained in "Line 2".
So far I've worked with python and set() objects to get the job done, but I have more than 100K lines and the set lookups become more and more time-consuming as the program runs :/
Thanks for your help
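For what it's worth, a gawk sketch that builds an inverted index (address -> the lines containing it), so each line is only compared against lines sharing at least one address with it; it assumes an address is not repeated within a line, and it keeps one copy of exact duplicate lines:
Code:
gawk '
{
    lines[NR] = $0
    nf[NR] = NF
    for (i = 1; i <= NF; i++)
        post[$i] = post[$i] " " NR          # inverted index: address -> line numbers
}
END {
    for (r = 1; r <= NR; r++) {
        n = split(lines[r], f)
        delete hits
        for (i = 1; i <= n; i++) {          # tally lines sharing each address of r
            k = split(post[f[i]], ls)
            for (j = 1; j <= k; j++)
                hits[ls[j]]++
        }
        drop = 0
        for (m in hits) {
            m += 0                          # subscripts are strings; compare as numbers
            if (m == r || hits[m] < nf[r])
                continue                    # m lacks at least one of r's addresses
            if (nf[m] > nf[r] || m < r) {   # proper superset, or earlier duplicate
                drop = 1
                break
            }
        }
        if (!drop)
            print lines[r]
    }
}
' file
This stays close to linear in practice, as long as no single address occurs on a large fraction of the lines.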
I am trying to process some .csv files on Linux as follows:
Some fields have data with embedded newline characters, like so:
"Bob Smith
531 Pennsylvania Avenue
Washington, DC"
(I verified the existence of the " via Wordpad. The file is too large to easily edit in Wordpad to get all the data for each row onto a single line.)
What Linux command would I use on the files to get the data in each cell on one line?
I have tried:
1. awk -v RS="" '{gsub (/\n/,"")}1' file > newfile
but the cell data was still being read in as if "531 Pennsylvania Avenue" was a brand new row in the CSV file.
2. Command 1 followed by awk -v RS="" '{gsub (/\r/,"")}1' newfile > finalFile
but that resulted in all of the data in the file being put onto a single line.
3. awk -v RS="" '{gsub (/\r\n/,"")}1' file > newFile
But that result was the same as attempt number 2.
How can I preprocess the file so that:
"Bob Smith
531 Pennsylvania Avenue
Washington, DC"
is read as a single field on a single line as part of the row it should be associated with, like
"Bob Smith 531 Pennsylvania Avenue Washington, DC"
I have two files which have exactly the same number of lines.
I want the first line of the first file to become the filename of a new file, whose content is the first line of the second file.
Then the second line of the first file should name another new file, whose content is the second line of the second file.
Then the third line of the first file should name a third new file, whose content is the third line of the second file,
and so on...
I am trying to do it using a for loop, but I can't see how to step through both lists at once.
This is what I have done
Code:
IFS=$'\n'
var=$(sed 's/\"http\(.*\)\/\(.*\).wav\"\,\".*/\2/g' 1797.csv) # filenames of all files
var2=$(sed 's/\"http\(.*\)\/\(.*\).wav\"\,\"\(.*\)\"$/\3/g' 1797.csv) # contents of all files
for j in $var;
do
#Here I do not know how to use $var2
done
Please help.
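For the loop question, a sketch that reads the two line lists in lock-step on separate file descriptors; names.txt and contents.txt are hypothetical scratch files standing in for your $var and $var2 output:
Code:
printf '%s\n' "$var"  > names.txt      # one filename per line
printf '%s\n' "$var2" > contents.txt   # one content line per line
while read -r name <&3 && read -r content <&4
do
    printf '%s\n' "$content" > "$name"
done 3< names.txt 4< contents.txt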
I have a little bash script that cats out a file and tells me if there is a line
where the 11th column has more than 6 characters in it.
It emails me when there is a bad line in a file - bad meaning that it will break a
downstream process.
Anyhow, when I get the email saying that there is a bad file, I just log in to the pc via
vpn and then sed out the lines from the file given in the email. The bad lines are
always in danny.csv, not danny1.csv.
It has been the same lines killing the downstream process for a few weeks, so I put the "sed -i"s into
the script and it does it automagically.
[CODE]
for i in danny.csv danny1.csv
do
cat /come/and/play/with/$i | perl -ne 'print if length((split /,/)[10]) > 6' | mail -s "danny.csv bad line" casper@casperr.com
done
#it would be nice to find a perl change the file in place
sed -i '/D,642,0642,UBF,EVL,,M,,S,S,FOREVER,213,213,/d' /come/and/play/with/us/danny.csv
sed -i '/D,642,0642,UBF,EVL,,M,,S,S,QSP-U=C,4,4,/d' /come/and/play/with/us/danny.csv
[/CODE]
However, when a new line gets put into this file, I am going to have to log in and take out the line.
So I have been trying to write a perl one-liner that will edit the file in place, like sed, and make a
backup of the file. I just need a perl one-liner that will delete any line where the 11th column has more
than 6 characters in it.
[CODE]
perl -p -i.bak -e 's/\,\w{7}\,//g'
[/CODE]
which does not work.
I tried something like this:
[CODE]
perl -nle 'print if /\,\w{7}\,/' /come/and/play/with/us/danny.csv
[/CODE]
but that does not catch the QSP-U=C line, and it catches more lines than just the
FOREVER one. For a solution I need to focus on the 11th column.
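A hedged one-liner in that direction, keying on the 11th comma-separated field the same way the detection pipeline above does (this assumes no quoted commas inside fields):
[CODE]
# edit in place with a .bak backup: drop any line whose 11th field
# is longer than 6 characters
perl -i.bak -ne 'print unless length((split /,/)[10] // "") > 6' /come/and/play/with/us/danny.csv
[/CODE]
The // "" guards lines with fewer than 11 fields, so they are kept rather than warned about.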
When I run delay.awk on AODV.tr and LAR.tr, for AODV.tr it gives me some delay value as output,
but for LAR.tr it shows me the following error:
gawk: e2edelay.awk:93: (FILENAME=larscen5.tr FNR=36516130) fatal: division by zero attempted
I was unable to rectify the error. Could you please help me?
Thank you so much.
This is the code of the awk file which I am using for the delay calculation. Kindly help.
# http://205.196.121.184/fnufnnc17mwg/...c/e2edelay.awk
# http://mohittahiliani.blogspot.dk/20...s-for-ns2.html
# ===================================================================
# AWK Script for calculating:
# => Average End-to-End Delay.
# ===================================================================
BEGIN {
seqno = -1;
# droppedPackets = 0;
# receivedPackets = 0;
count = 0;
}
{
if($4 == "AGT" && $1 == "s" && seqno < $6) {
seqno = $6;
}
# else if(($4 == "AGT") && ($1 == "r")) {
# receivedPackets++;
# } else if ($1 == "D" && $7 == "tcp" && $8 > 512){
# droppedPackets++;
# }
#end-to-end delay
if($4 == "AGT" && $1 == "s") {
start_time[$6] = $2;
} else if(($7 == "cbr") && ($1 == "r")) {
end_time[$6] = $2;
} else if($1 == "D" && $7 == "cbr") {
end_time[$6] = -1;
}
}
END {
for(i=0; i<=seqno; i++) {
if(end_time[i] > 0) {
delay[i] = end_time[i] - start_time[i];
count++;
}
else
{
delay[i] = -1;
}
}
for(i=0; i<count; i++) {
if(delay[i] > 0) {
n_to_n_delay = n_to_n_delay + delay[i];
}
}
n_to_n_delay = n_to_n_delay/count;
print "\n";
# print "GeneratedPackets = " seqno+1;
# print "ReceivedPackets = " receivedPackets;
# print "Packet Delivery Ratio = " receivedPackets/(seqno+1)*100
#"%";
# print "Total Dropped Packets = " droppedPackets;
print "Average End-to-End Delay = " n_to_n_delay * 1000 " ms";
print "\n";
}
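For what it's worth, that fatal error means count is still 0 when the END block runs, i.e. the script never recorded a received "cbr" packet in the LAR trace (field 7 may hold a different packet type there). A hedged guard for the final division so the script reports the situation instead of crashing:
Code:
# replace the unconditional "n_to_n_delay = n_to_n_delay/count;" with:
if (count > 0)
    n_to_n_delay = n_to_n_delay / count;
else {
    n_to_n_delay = 0;
    print "no received cbr packets found - check field 7 of the trace";
}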
Hi all,
I have a requirement where I have a file. The contents of the file are:
#comments
VAR="abg"
RES=123
#comments
IC6790ABG="https://www.abc.com"
IC5678-vg="https://www.bhy.com"
IC-gy_567:78="https://www.gyt.com"
#comments
The variable names cannot contain characters like "-" or ":", so
in this file I have to find the names starting with IC and remove characters like "-" and ":".
I want to change only the variable name, not the whole line.
I have used this sed command:
sed -i '/^IC/s/[^0-9 a-z A-Z _]*//g' file
but when I use it, it replaces characters in the whole line, and the
output becomes:
#comments
VAR="abg"
RES=123
#comments
IC6790ABGhttpswwwabccom
IC5678vghttpswwwbhycom
ICgy56778httpswwwgytcom
#comments
But I want the output like this :
#comments
VAR="abg"
RES=123
#comments
IC6790ABG="https://www.abc.com"
IC5678vg="https://www.bhy.com"
ICgy_56778="https://www.gyt.com"
#comments
How can I get the desired output? Thanks for your help in advance.
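One hedged way is to split each line on "=" and clean only the first field; a short awk sketch (stripping every character that is not a letter, digit, or underscore from the name):
Code:
awk 'BEGIN { FS = OFS = "=" } /^IC/ { gsub(/[^A-Za-z0-9_]/, "", $1) } 1' file > file.new && mv file.new file
Because the gsub() only touches $1, everything to the right of the first "=" is left exactly as it was; GNU awk users could instead edit in place with gawk -i inplace.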
Currently taking my first linux course in college and I am stuck on a project... details here: https://aacc.instructure.com/courses...item_id=614603
So far this is the code I have come up with, but I get an error on the 42nd line, where it says done. I am not an expert by any means but would like to stop struggling and finish this project. Below is my code... any input welcome.
#!/bin/bash
#blinker
#creates a bi-stable process that displays ON and OFF
touch .running
clear
while [ -f .running ]
do
    count=0
    while [ -f .running ]
    do
        count=$((count+1))
        if [ $count -gt 4 ]
        then
            break
        fi
        echo "Green"
        sleep 4
    done
    count=0
    while [ -f .running ]
    do
        count=$((count+1))
        if [ $count -gt 4 ]
        then
            break
        fi
        echo "red"
        sleep 4
        count=0
        while [ -f .running ]
        do
            count=$((count+1))
            if [ $count -gt 2 ]
            then
                break
            fi
            echo "green"
            sleep 2
            count=0
            while [ -f .running ]
            do
            done
        done
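For reference, the syntax error is structural: the loop that echoes "red" is never closed, and the innermost while near the end (the 42nd line of the script) has an empty body - do followed immediately by done - which bash rejects. A minimal sketch of what the cycle seems to be aiming at, guessing the intent and leaving out the incomplete 2-second loop:
#!/bin/bash
#blinker
#alternates two messages for as long as the .running flag file exists
touch .running
clear
while [ -f .running ]
do
    for i in 1 2 3 4            # four "Green" messages, 4 seconds apart
    do
        [ -f .running ] || break
        echo "Green"
        sleep 4
    done
    for i in 1 2 3 4            # four "red" messages, 4 seconds apart
    do
        [ -f .running ] || break
        echo "red"
        sleep 4
    done
done
# remove .running from another terminal to stop the blinker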
I have a log file with a header (which I can skip with awk) and a footer, which I need to find a way to remove. The goal is to extract the middle lines from the file: there is a header (exactly 1 line) and a footer (exactly 1 line).
The only way I can figure out how to do this is if I already know how many lines are in the file to begin with. For example, if the file looks like this:
line 1 (header)
line 2 (interesting line)
line 3 (interesting line)
line 4 (footer)
I just want to extract the middle "interesting lines" without the header/footer lines.
I can't use grep to remove the header/footer, because I don't know what those lines will contain, only that they exist and are exactly 1 line each. In general, I don't know how many lines are in the file.
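Since neither tail nor sed's $ address needs to know the line count, one hedged approach, assuming the header and footer are exactly one line each:
Code:
# strip the first line, then the (new) last line
tail -n +2 file | sed '$d'

# the same in a single awk pass: print each line one record late, so the
# header is never emitted and the footer is still buffered at EOF
awk 'NR > 2 { print prev } { prev = $0 }' file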