Grep: Find Files That Do Not Have Multiple Different Strings

Hi all,

I'm trying to identify files that do not have matches for certain strings. FYI, these are files of DNA sequences, and I'm trying to find those that are NOT sampled for any species in my group of interest (e.g., genes that are specific to that group of organisms).

I tried this code but it's actually yielding a list of files that DO match for my regexp.
Code:
for FILENAME in *.fas
do
grep -q -L ">PBAH" $FILENAME && grep -q -L ">SKOW" $FILENAME && grep -q -L ">CGRA" $FILENAME && echo $FILENAME
done

Basically I want to somehow go through and find files that do not contain ">PBAH", ">SKOW", or ">CGRA". Any assistance would be greatly appreciated!
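For reference, a note on why the loop prints the wrong files, plus a sketch of two possible fixes: -q makes grep report success (exit 0) whenever a match is found, so the && chain echoes files that DO contain the patterns. Inverting each test, or letting grep -L do the whole job with several -e patterns, should list the files that contain none of them:
Code:
# Sketch 1: invert each test so the filename is printed only when no pattern matches
for FILENAME in *.fas
do
    ! grep -q ">PBAH" "$FILENAME" && ! grep -q ">SKOW" "$FILENAME" && ! grep -q ">CGRA" "$FILENAME" && echo "$FILENAME"
done

# Sketch 2: -L lists files that contain no match for any of the given patterns
grep -L -e ">PBAH" -e ">SKOW" -e ">CGRA" *.fas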

Best,
Kevin


Similar Content



Need Help In Bash Scripting

I have two files which have exactly the same number of lines.
I want the first line of the first file to be the filename of a new file, and the content of this new file to be the first line of the second file.
Then the second line of the first file should be the filename of another new file, and the content of this new file should be the second line of the second file.
Then the third line of the first file should be the filename of yet another new file, and the content of this new file should be the third line of the second file.
And so on...
I am trying to do it using a for loop, but I am not able to manage two loops at once.
This is what I have done
Code:
IFS=$'\n'
var=$(sed 's/\"http\(.*\)\/\(.*\).wav\"\,\".*/\2/g' 1797.csv) # filenames of all files
var2=$(sed 's/\"http\(.*\)\/\(.*\).wav\"\,\"\(.*\)\"$/\3/g' 1797.csv) # contents of all files
for j in $var;
do
#Here I do not know how to use $var2
done

Please help.
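A sketch of one way to avoid nesting loops at all: keep $var and $var2 exactly as extracted above and read one line from each per iteration, on two file descriptors (this assumes the two lists really do line up one-to-one; name and content are just illustrative variable names):
Code:
# pair the filename list and the content list line by line
while read -r name <&3 && read -r content <&4
do
    printf '%s\n' "$content" > "$name"    # append ".wav" here if the extension should be kept
done 3<<< "$var" 4<<< "$var2"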

Output A List Of Five Books With Their Filename Titles Into One File

Dear Linux forum,
could I output a list of five books, each with its filename as a title, into one file?
To output the contents of all files with their filenames, there was: find . -type f | while read x; do echo -e "\n$x"; cat "$x"; done > бетховен.txt

Although the files are successively named 1Atitle... 2Atitle, the first two in the output aren't 1A and 2A; I want 1A .. 5A (with 2, 3, 4 in between). The actual names are 1АБетховен .. 5АБетховен, and this breaks everything I hoped for.

Could the task be done with the head, cat or grep command? cat has no option to print the filename, head can't output the whole file, and grep does print filenames but its primary use is searching line by line. With find I couldn't write each file by hand...

I've also got another command, awk '{ print FILENAME, $0 }' (it is supposed to show the filename, though when I ran it, it didn't end up doing what I wanted).
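For what it's worth, a sketch assuming the five books match globs like 1АБетховен* .. 5АБетховен* (adjust these to the real names): loop over them in order, print each filename as a title, then its contents:
Code:
# loop over the five books in numeric order; the glob patterns are an assumption
for x in 1АБетховен* 2АБетховен* 3АБетховен* 4АБетховен* 5АБетховен*
do
    echo -e "\n$x"     # the filename as a title
    cat "$x"           # followed by the whole book
done > бетховен.txt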


Currently I blame the Linux learning curve: the Google results, the unanswered messages, and all that, even after translating a nice question directly into English. Wouldn't it help to design Unix's language to be really descriptive, so that you can write it the way you think?
I'm deeply sorry for the grief!...(

How Can I Grep A Variable?

I want to do an AND search with grep in a shell script,

but it's hard to grep a variable.


---------------------------------------------------------------
#!/bin/bash

if [ $# -eq 0 ]
then
    echo "Usage: phone searchfor [...searchfor]"
    echo "(You didn't tell me what you want to search for)"
else
    pass=0
    find=""

    for idx in $*
    do
        if [ -n "$idx" ]
        then
            if [ $pass -eq 0 ]
            then
                find=$(egrep "$idx" mydata)
                pass=1
            else
                find=$("$find" | grep "$idx")
                echo $find
            fi
        fi
    done

    if [ -z "$find" ]
    then
        echo "There is no such thing"
    else
        echo $find | awk -f display.awk
    fi
fi
-----------------------------------------------------

There is one error, "command not found", on this line:

find=$("$find" | grep "$idx")

How can I grep a variable and store the result in a variable?
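A sketch of what seems to be going on, and a possible fix: inside $( ), "$find" is being executed as a command rather than being searched, which is where "command not found" comes from. Feeding the variable's contents to grep on standard input should work:
Code:
# pipe the variable's text into grep and capture the filtered lines back
find=$(printf '%s\n' "$find" | grep "$idx")

# or, since this is bash, with a here-string
find=$(grep "$idx" <<< "$find")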

Error In Running Leach_test In NS-2.34

Hi all,

I am trying to simulate LEACH on NS2, but I've had problems running leach_test. I've followed all the steps from installing NS2 up to installing the LEACH patch (I used the latest one from exidus). Here's the error message I found in leach.err.

Code:
couldn't read file "/mit/uAMPS/uamps.tcl": no such file or directory
    while executing
"source.orig /mit/uAMPS/uamps.tcl"
    ("uplevel" body line 1)
    invoked from within
"uplevel source.orig[list $fileName]"
    invoked from within
"if [$instance_ is_http_url $fileName] {
set buffer [$instance_ read_url $fileName]
uplevel eval $buffer
} else {
uplevel source.orig[list $fileName]
..."
    (procedure "source" line 8)
    invoked from within
"source /mit/uAMPS/uamps.tcl"
    (file "tcl/mobility/leach.tcl" line 18)
    invoked from within
"source.orig tcl/mobility/leach.tcl"
    ("uplevel" body line 1)
    invoked from within
"uplevel source.orig[list $fileName]"
    invoked from within
"if [$instance_ is_http_url $fileName] {
set buffer [$instance_ read_url $fileName]
uplevel eval $buffer
} else {
uplevel source.orig[list $fileName]
..."
    (procedure "source" line 8)
    invoked from within
"source tcl/mobility/$opt(rp).tcl"
    (file "tcl/ex/wireless.tcl" line 187)

Your help is very much appreciated, thanks!

Inquiry On A Bash Script Using Sed And Grep -c

Hi Everyone,

I need some help with my bash script. I'm trying to rename a certain line in a file, and there may be one or more such lines.

IDXCOUNT=`grep -c 'index .* on ' $FILENAME`;

for n in $(eval echo {1..$IDXCOUNT});
do
    timestamp=$(date +"%s");
    echo "Renaming index idx_$timestamp..";
    if [ $n -eq 1 ]; then
        sed -i "0,/^index [^)]* on /s//index idx_$timestamp on /" $FILENAME;

My problem is that if there are 2 or more sed targets, the same idx_$timestamp gets generated for all of them, which causes duplicate index names when the script finishes. My goal is for the script to recognize that there are multiple indexes in the file and rename them one by one, each with a unique name.
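A sketch of one alternative, assuming every target line starts with "index <name> on ": do the renaming in a single awk pass and append a counter to the timestamp, so every new name is unique even when several lines are renamed in the same second:
Code:
ts=$(date +%s)
awk -v ts="$ts" '
    /^index .* on / { n++; sub(/^index [^ ]+ on /, "index idx_" ts "_" n " on ") }
    { print }
' "$FILENAME" > "$FILENAME.tmp" && mv "$FILENAME.tmp" "$FILENAME"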

I'm new to bash so I'm not sure if I explained my issue well but I will appreciate any help!!

Thanks!

Why Does Grep Return "No Such File Or Directory"?

I copied the following from my linux console.

grep -lr "SMTP" *.ini
grep: *.ini: No such file or directory

I wanted to search recursively under the current location, in files with the extension .ini.
There are files that contain "SMTP" under this directory, but I got the above error message.
What is wrong? I am using CentOS 6.
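A sketch of what is probably happening: the shell expands *.ini against the current directory only, and since no .ini files live there, grep receives the literal string "*.ini" (so -r never gets a chance to recurse). Letting grep itself, or find, do the recursion should work:
Code:
# let grep recurse on its own, restricted to .ini files (GNU grep)
grep -rl --include='*.ini' "SMTP" .

# or have find collect the .ini files and hand them to grep
find . -type f -name '*.ini' -exec grep -l "SMTP" {} +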

Thanks,
3rock

Script To Find The File Creation Time Is More Than Ten Minutes

Hi All,

I'm trying to create a script which checks the creation time of a file and, if the file is more than ten minutes old, sends out an alert.

For ex : Creation date of file is 10:30 AM
current time is 10:45 AM

Then send an alert/ message.

This is the script i wrote below :

Code:
#!/bin/ksh

filename="/apps/log/file.txt"

if [ -f "${filename}" ]
then
        createTime=`ls -lad "${filename}" | awk '{print $8}'`
        echo "$createTime"
        currentTime=`date '+%M'`
        echo "$currentTime"
        DIFF=$(( $currentTime - $createTime )) 
        echo "$DIFF"

else

        exit 1

fi

I am getting a syntax error on the subtraction when I try to run this script. I understand that the creation time and the current time are in different formats, which is why the error is thrown, but I don't know how to rectify it.

I need to find out whether the file was created more than ten minutes ago.

Please help me in achieving this output.
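A sketch of one way to sidestep the formatting problem entirely: compare epoch seconds instead of parsing ls output. This assumes GNU stat is available, and it uses the file's last-modification time as a stand-in for creation time (most Unix filesystems don't record a true creation time):
Code:
#!/bin/ksh

filename="/apps/log/file.txt"

if [ -f "${filename}" ]
then
        filetime=$(stat -c %Y "${filename}")   # last modification, in epoch seconds
        now=$(date +%s)
        age=$(( now - filetime ))

        if [ "$age" -gt 600 ]                  # more than ten minutes (600 seconds)
        then
                echo "ALERT: ${filename} is more than ten minutes old"
        fi
else
        exit 1
fi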

Using Xargs And Grep In Find Command

I've been using this a lot:

find <directory to start search at> -name "<files to search in>" -type f | xargs grep "<string to search for>"

e.g.

find /usr/include -name "*.h" -type f | xargs grep "#define UINT"

now what if I wanted to output the results to a file?
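Redirecting the output of the whole pipeline should be enough; adding -H keeps the filename prefix even when xargs happens to call grep with a single file:

find /usr/include -name "*.h" -type f | xargs grep -H "#define UINT" > results.txt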

Diffing The Line Numbers

hi guys

I am trying to find the "size" of a "block" of data in LARGE data files; the example below, test_data.txt, is very simplified. By "size" I mean the difference in line numbers between blocks, and the "size" will be constant throughout the file, so:

1234 6.600000 4321
1234 8.500000 4321
1234 1.800000 4321
1234 2.300000 4321
1234 8.500000 4321
1234 2.800000 4321

If I define a block as starting whenever I find 8.500000 in the second column, then in the example the block size would be 3, because 8.500000 occurs on the 5th line and on the 2nd. Right now I am using

Code:
 grep -n "8.500000" test_data.txt | cut -f1 -d:

and/or

Code:
 awk '/8.500000/ {print FNR}' test_data.txt

(obviously I don't remember how to tag text as code)

BTW, the grep command is much, much faster.

Both of these commands give an entire list (a long list of numbers for files greater than a gig) of line numbers, which I then have to subtract one from another to come up with 3 in the example. Not that I'm opposed to doing the math, but I would think awk or grep should be able to do this for me.
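A sketch of how awk could do the subtraction itself: remember the line number of the previous match and print the difference as soon as the next match appears, then quit (the early exit matters for the multi-gig files):
Code:
awk '$2 == "8.500000" { if (prev) { print FNR - prev; exit } prev = FNR }' test_data.txt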

ideas?

tabby

How Can I Print A Specific Range Of Numbers From A File?

hello,

I am trying to make a table from some files. I used this to count how many "RD_" fields I have in my file:
Code:
grep -o 'RD_' $f | grep -c 'RD_'
For example, I got 5 "RD_" fields; now I want to print that many fields from another file, starting from the 2nd field. I did it manually like:
Code:
awk 'NR==1{print"{"$2","$3","$4","$5","$6","0.0000",""0.0000""}"","}' $file
I want to make the two work together, and a bit more automatically, something like:
Code:
awk 'NR==1{print"{"$2" to "$5"," append zeros to make it 7 fields in total "}"","}' $file
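A sketch of one way to combine the two steps (assumptions: $f is the file where "RD_" is counted, $file supplies the fields on its first line, and the output format follows the manual example above): count first, then have awk print that many fields starting at $2 and pad with 0.0000 up to seven entries:
Code:
N=$(grep -o 'RD_' "$f" | grep -c 'RD_')        # how many RD_ fields were found

awk -v n="$N" 'NR == 1 {
    out = ""
    for (i = 2; i <= n + 1; i++) out = out $i ","     # n fields, starting at $2
    for (j = n + 1; j <= 7; j++) out = out "0.0000,"  # pad with zeros up to 7 entries
    sub(/,$/, "", out)                                # drop the trailing comma
    print "{" out "},"
    exit
}' "$file"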


Your comments would be appreciated.
Thanks a lot