Hi,
I am a newbie in Linux shell scripting. Can anybody help me check for the presence of a file identified by a variable in a shell script?
For example, I am reading the contents of a file with a while loop, as below:
"while read -r line
do
code block
done < file_name"
Now every line of the file gets stored in the variable 'line', one at a time. The problem is that every line in the file is itself the path of another file, say xyz.txt, and I am checking for the presence of that xyz.txt file with the command below:
if [-f $line]
'line' is the variable that holds the path of xyz.txt, but the check is not working: it fails to detect the presence of xyz.txt when I address it through the variable 'line'.
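From what I can tell, the test needs spaces inside the brackets, and the variable should be quoted in case a path contains spaces; a minimal sketch of the corrected loop (file_name stands in for the real list):
Code:
while read -r line
do
    # [ needs a space after it and before ]; quoting protects paths with spaces
    if [ -f "$line" ]; then
        echo "$line exists"
    fi
done < file_name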
Please help me. Thanks in advance.
Hi!
I wrote a shell script that changes the IP in httpd.conf, reading the addresses from a text file and saving a new httpd.conf file for each IP in the text file. The code is below. I'm using mRemote on Windows (RHEL Release 5.7), and I'm getting this error:
./script1.sh: line 19: syntax error: unexpected end of file
Please help me with that error. Thanks!
(PS: line 19 is the last line in the code => done < "$filename")
Code:
#!/bin/bash
DATE='date +%Y%m%d_%H:%M:%S'
NAME=httpd_ip_change
EXTENSION=conf
max=10
filename="$1"
while read -r line
for ((i=2; i<=$max; ++i ))
do
touch $NAME$i.$EXTENSION.$DATE
sed "s/<VirtualHost 10.11.92.81:80>/$line/g" ./httpd.conf > $NAME$i.$EXTENSION.$DATE
done < "$filename"
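Looking at the loop structure, the "unexpected end of file" most likely comes from the while loop having no 'do' and no matching 'done': the single done < "$filename" closes the inner for loop, so the while is never closed. The DATE line also assigns a literal string rather than running the date command. A corrected sketch, assuming the intent is one output file per line of the input (the counter i now advances once per line):
Code:
#!/bin/bash
DATE=$(date +%Y%m%d_%H:%M:%S)   # $(...) runs the command; plain quotes store it as text
NAME=httpd_ip_change
EXTENSION=conf
filename="$1"
i=2
while read -r line
do
    # the redirection creates the file, so no touch is needed
    sed "s/<VirtualHost 10.11.92.81:80>/$line/g" ./httpd.conf > "$NAME$i.$EXTENSION.$DATE"
    i=$((i + 1))
done < "$filename"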
OK, yes, this is a homework assignment, BUT I am NOT looking to have the answers given to me. I am in the 6th week of my first Linux class ever, and we are in our first few weeks of beginning scripting. I have some ideas of what to do and where to start, but not many, and no one to bounce ideas off... We are using the UNIX Bash shell, so I have no clue about any others.
The scenario is that I need a script that searches all my users' home directories for bad words. The script should report certain info to the screen, like the username, the word found, and the path. It should then ask a user whether the file is good or bad; if bad, the file name should be put into a list of bad file names, and if good, it should be removed from the list and no longer flagged by the script.
What I have so far is the idea of doing a loop. I do know that if I run grep -r -e kill -e steal /home/* I get a list of what I need. I also know that the list is separated by delimiters, which I can pipe through to get a variable for each thing I need, and that I can send it all to a file with > filename.txt
What I have no clue about is how to start a loop that would do this...
for each line in filename.txt
UNAME=...
LOC=...
TXT=...
echo "Username: $UNAME, Line with bad word found: $TXT, and Path and file name: $LOC. Is this a BAD file? (Y)"
read YORN
if [ "$YORN" = "Y" ]; then
>> (line of text from grep) badfiles.txt
fi
Next or whatever goes there... Sorry if this is crazy; I just really need some direction. I am trying to learn, so please don't give me the whole answer; that will do nothing for me, and I will not be able to explain the code I came up with.
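The closest I have gotten to the plumbing (none of the assignment's logic) is a skeleton like this; the field handling assumes grep prints path:matched-text, and filename.txt/badfiles.txt are the names from above:
Code:
grep -r -e kill -e steal /home/* > filename.txt
while IFS= read -r line
do
    LOC=${line%%:*}                      # everything before the first ':' is the path
    TXT=${line#*:}                       # everything after it is the matching text
    UNAME=$(echo "$LOC" | cut -d/ -f3)   # /home/bob/... -> bob
    echo "Username: $UNAME, Line with bad word found: $TXT, Path and file name: $LOC. Is this a BAD file? (Y)"
    read -r YORN < /dev/tty              # stdin is busy reading the file, so ask the terminal
    if [ "$YORN" = "Y" ]; then
        echo "$line" >> badfiles.txt
    fi
done < filename.txt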
Hi all,
I am trying to simulate LEACH on NS2, but I've had problems running leach_test. I've followed all the steps, from installing NS2 up to installing the LEACH patch (I used the latest one from exidus). Here's the error message I found in leach.err:
Code:
couldn't read file "/mit/uAMPS/uamps.tcl": no such file or directory
while executing
"source.orig /mit/uAMPS/uamps.tcl"
("uplevel" body line 1)
invoked from within
"uplevel source.orig[list $fileName]"
invoked from within
"if [$instance_ is_http_url $fileName] {
set buffer [$instance_ read_url $fileName]
uplevel eval $buffer
} else {
uplevel source.orig[list $fileName]
..."
(procedure "source" line 8)
invoked from within
"source /mit/uAMPS/uamps.tcl"
(file "tcl/mobility/leach.tcl" line 18)
invoked from within
"source.orig tcl/mobility/leach.tcl"
("uplevel" body line 1)
invoked from within
"uplevel source.orig[list $fileName]"
invoked from within
"if [$instance_ is_http_url $fileName] {
set buffer [$instance_ read_url $fileName]
uplevel eval $buffer
} else {
uplevel source.orig[list $fileName]
..."
(procedure "source" line 8)
invoked from within
"source tcl/mobility/$opt(rp).tcl"
(file "tcl/ex/wireless.tcl" line 187)
Your help is very much appreciated, thanks!
I am trying to process some .csv files on Linux, as follows:
Some fields have data with embedded newline characters, like so:
"Bob Smith
531 Pennsylvania Avenue
Washington, DC"
(I verified the existence of the " via WordPad. The file is too large to edit easily in WordPad to get all the data for each row onto a single line.)
What Linux command would I use on the files to get the data in each cell onto one line?
I have tried:
1. awk -v RS="" '{gsub (/\n/,"")}1' file > newfile
but the cell data was still being read in as if "531 Pennsylvania Avenue" was a brand new row in the CSV file.
2. Command 1 followed by awk -v RS="" '{gsub (/\r/,"")}1' newfile > finalFile
but that resulted in all of the data in the file being put onto a single line.
3. awk -v RS="" '{gsub (/\r\n/,"")}1' file > newFile
But that result was the same as attempt number 2.
How can I preprocess the file so that:
"Bob Smith
531 Pennsylvania Avenue
Washington, DC"
is read as a single field on a single line as part of the row it should be associated with, like
"Bob Smith 531 Pennsylvania Avenue Washington, DC"
Hello everyone,
Although it seems easy, I've been stuck on this problem for a while now and I can't figure out a way to get it done.
My problem is the following:
I have a file where each line is a sequence of IP addresses, for example:
Line 1: 10.0.0.1 10.0.0.2
Line 2: 10.0.0.5 10.0.0.1 10.0.0.2
...
What I'd like to do is remove the lines that are completely contained in other lines. In the previous example, "Line 1" would be deleted, as it is contained in "Line 2".
So far, I've worked with Python and set() objects to get the job done, but I've got more than 100K lines, and the set lookups are getting time-consuming as the program runs :/
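In case a shell tool is acceptable, here is an awk sketch of the same idea (ips.txt is a placeholder name): it indexes every line by its addresses, and for each line it only tests the lines sharing its first address as possible supersets, which keeps the comparisons well below all-pairs on typical data. It assumes no address repeats within a single line:
Code:
awk '{
    lines[NR] = $0
    n[NR] = split($0, t, " ")
    for (i = 1; i <= n[NR]; i++) {
        has[NR, t[i]] = 1
        occ[t[i]] = occ[t[i]] " " NR      # which lines contain this address
    }
}
END {
    for (r = 1; r <= NR; r++) {
        split(lines[r], t, " ")
        m = split(occ[t[1]], cand, " ")   # a superset must contain the first address
        redundant = 0
        for (k = 1; k <= m && !redundant; k++) {
            s = cand[k]
            if (s == r || n[s] < n[r]) continue
            if (n[s] == n[r] && s > r) continue   # of two identical lines, keep the first
            ok = 1
            for (i = 1; i <= n[r] && ok; i++)
                if (!((s, t[i]) in has)) ok = 0
            redundant = ok
        }
        if (!redundant) print lines[r]
    }
}' ips.txt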
Thanks for your help!
Code:
couldn't read file "mit/uAMPS/ns-leach.tcl": no such file or directory
while executing
"source.orig mit/uAMPS/ns-leach.tcl"
("uplevel" body line 1)
invoked from within
"uplevel source.orig[list $fileName]"
invoked from within
"if [$instance_ is_http_url $fileName] {
set buffer [$instance_ read_url $fileName]
uplevel eval $buffer
} else {
uplevel source.orig[list $fileName]
..."
(procedure "source" line 8)
invoked from within
"source mit/uAMPS/ns-leach.tcl"
(file "leach.tcl" line 7)
I have a little bash script that cats out a file and tells me if there is a line where the 11th column has more than 6 characters in it. It emails me when there is a bad line in a file (bad meaning that it will break a downstream process).
Anyhow, when I get the email saying there is a bad file, I just log in to the PC via VPN and then sed out the lines named in the email. The bad lines are always in danny.csv, not danny1.csv.
It has been the same lines killing the downstream process for a few weeks, so I put the sed -i's into the script and it does it automagically.
[CODE]
for i in danny.csv danny1.csv
do
cat /come/and/play/with/$i | perl -ne 'print if length((split /,/)[10]) > 6' | mail -s "danny.csv bad line" casper@casperr.com
done
# it would be nice to find a perl one-liner that changes the file in place
sed -i '/D,642,0642,UBF,EVL,,M,,S,S,FOREVER,213,213,/d' /come/and/play/with/us/danny.csv
sed -i '/D,642,0642,UBF,EVL,,M,,S,S,QSP-U=C,4,4,/d' /come/and/play/with/us/danny.csv
[/CODE]
However, when a new line gets put into this file, I am going to have to log in and take the line out. So I have been trying to write a perl one-liner that will edit the file in place, like sed, and make a backup of the file. I just need a perl one-liner that will delete any line where the 11th column has more than 6 characters in it.
[CODE]
perl -p -i.bak -e 's/\,\w{7}\,//g'
[/CODE]
which does not work.
I tried something like this:
[CODE]
perl -nle 'print if /\,\w{7}\,/' /come/and/play/with/us/danny.csv
[/CODE]
but that does not catch the QSP-U=C line, and it catches more lines than just the FOREVER ones. For a solution I need to focus on the 11th column.
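Building on the detection one-liner already in the script, here is a sketch of the in-place delete with a backup; it reuses the same length test instead of trying to pattern-match the values:
[CODE]
# keep only lines whose 11th comma-separated column is 6 characters or fewer;
# -i.bak edits in place and leaves the original as danny.csv.bak
perl -i.bak -ne 'print unless length((split /,/)[10]) > 6' /come/and/play/with/us/danny.csv
[/CODE]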
Hello
I have a text file which has blocks like
Code:
dir1/dir2/dir3/name_run_number1:
line1_run_number1_part1
line2_run_number1_part2
line3_run_number1_part3...
Each block is separated by a blank line, there is a ":" at the end of the "header" of each one, and every line in a block carries the same "number1" after "run_".
What I want to do, for each block, is extract the "number1" shown in the first line, and then, for the lines below, count from 1 to 20 and print a message if a "partX" line is missing. Any bash or python would be fine.
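Since the blocks are separated by blank lines, awk's paragraph mode fits well; here is a sketch (blocks.txt stands in for the real file, and it takes whatever digits follow "_part" as the part index):
Code:
awk -v RS= -v FS="\n" '{
    run = $1
    sub(/:$/, "", run); sub(/.*run_/, "", run)   # header -> the run number
    split("", seen)                              # reset the seen parts per block
    for (i = 2; i <= NF; i++) {
        p = $i
        if (sub(/.*_part/, "", p))
            seen[p + 0] = 1                      # numeric part index
    }
    for (j = 1; j <= 20; j++)
        if (!(j in seen))
            printf "run_%s: part%d is missing\n", run, j
}' blocks.txt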
Thanks
I have a log file with a header (which I can skip with awk) and a footer, which I need to find a way to remove. The goal is to extract the middle lines from the file: specifically, there is a header (1 line) and a footer (1 line).
The only way I can figure out how to do this is if I already know how many lines are in the file to begin with. For example, if the file looks like this:
line 1 (header)
line 2 (interesting line)
line 3 (interesting line)
line 4 (footer)
I just want to extract the middle "interesting lines" without the header/footer lines.
I can't use grep to remove the header/footer, because I don't know what those lines will contain, only that they exist and are exactly 1 line each. In general, I don't know how many lines are in the file.
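Since sed addresses lines by position, it can drop the first and last line without knowing the file's length; a minimal sketch (file and trimmed are placeholder names):
Code:
# 1d deletes the first line; $d deletes the last ($ means the last line in sed)
sed '1d;$d' file > trimmed
An awk equivalent buffers one line so it never prints the footer, and skips the header by line number: awk 'NR > 2 { print prev } { prev = $0 }' file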