Its plentiful documentation fails to explain how to use it. After googling all over the world and hunting down examples, it turns out they don't work as the man page indicates if you alter them in the slightest. In fact I know what I want to find, where it is, and how to describe it; the problem is getting find to list that file or those files, and moreover making sense of its horrid documentation. It lists too many things. It ignores -prune. It either does nothing or too much. It's a horrible, horrible program, and you are expected to make up the difference by experimentation.
It's so complicated as to make it more worthwhile just to list every file and filter the list with other utilities. Honestly, why go through so much trouble to learn a command that is as obfuscated as a heart pump?
Sure, it's great for listing all the files and directories in one subdir, but if you want to exclude multiple dirs or files, forget it. Pipe it through grep.
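For what it's worth, the usual -prune gotcha is the missing explicit -print after -o: without it, find applies a default -print to the whole expression, pruned directories included. A minimal sketch, assuming GNU find and hypothetical directory names:
Code:
# skip everything under ./build and ./.git, print every other file
find . \( -path ./build -o -path ./.git \) -prune -o -type f -print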
hey guys,
Let's say I want to find out which log files have related ntp information in them. I use cat and grep to search through the files in /var/log and then export that to a file. This is the command...
# cat /var/log/* | grep ntp > /home/log.txt
The file created from this command will not include the files the log entries are a part of. Why not? For example, if you do this same command without exporting to the /home/log.txt file, it will show you in stdout which file each log entry is in. Hope I'm making sense here. My question is: is there a clever way to export to a file so that the file created is structured exactly like the stdout of the command below?
# cat /var/log/* | grep ntp
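A possible explanation: grep only prefixes matches with filenames when it is handed multiple file arguments; cat merges everything into one anonymous stream before grep ever sees it, so the names are gone regardless of where the output goes. A sketch of the usual workaround, assuming the same paths as above:
Code:
# let grep open the files itself so it can prefix each match with its filename
grep ntp /var/log/* > /home/log.txt
# or force the filename prefix explicitly, even for a single file
grep -H ntp /var/log/* > /home/log.txt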
Don't know how, but now when I use ls I get the list of files and directories with each entry on its own line. I want the default behavior I had before. When I cd into another directory I get the default behavior: a horizontal list of files and folders.
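A hedged guess at what may be going on: ls prints one name per line when its output is not a terminal, and a stray alias (e.g. to ls -1) can force the same behaviour interactively. A quick sketch for checking, with no particular alias assumed:
Code:
type ls        # see whether ls has been aliased, e.g. to 'ls -1'
ls -C          # force multi-column output regardless of any alias
unalias ls     # remove a problematic alias for the current session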
If there is one Windows XP feature that I greatly miss in Mint, it is the Search Companion.
I have been struggling with 'grep' in order to create something suitable, but with limited success. Take the following problem:
I wish to interrogate the folder home/dell/Documents/Domestic/Recipes, searching for all files containing the word "mushroom" or "mushrooms", ignoring case. (I can manage the latter.)
Each individual file search should terminate at the first instance of a match and move to the next file. (Recursive, yeh?) Only the file names need to be listed and the output should be paged to allow for easier reading of long lists.
Several different types of file may be involved, including .doc, .odt, .txt, .pdf, .htm and .rtf. It would be nice to include all of them in one command. (Wild-card behaviour in grep is not entirely predictable - at least not for me.) Running a separate grep command for each different file type would be tedious.
A significant difficulty is that, if grep fails with a syntax or run-time error, it generally reports the fact, but it also has a habit of producing no output and perhaps not returning to the command prompt, sitting there inviting the user to decide what to do next. What makes this particularly frustrating is that some file types might not be amenable to a grep search. Text in .txt files and, it would appear, .doc files appears to be searchable, but I suspect that .odt files might be more problematic. The snag in such circumstances is trying to interpret grep's response. Does a null return mean that no match was found, or that the file format cannot be successfully interrogated? Such a failure might not be apparent if the associated file names are simply excluded from the output list.
Apart from grep, is there any other software that would do the job? Sadly LibreOffice Writer seems to be lacking in this area.
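For the plain-text formats, at least, a hedged sketch covering the requirements above (recursive, case-insensitive, stop at the first match per file, names only, paged) might look like this. Searching for 'mushroom' also catches 'mushrooms' as a substring, -l makes grep stop reading each file at its first match and print only the name, and the --include globs are just the types named above; .odt is a zip archive and .pdf is binary, so for those a null result means "unreadable as text", not "no match". The leading slash on the path is assumed:
Code:
grep -ril --include='*.txt' --include='*.htm' --include='*.rtf' --include='*.doc' \
     'mushroom' /home/dell/Documents/Domestic/Recipes | less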
Hi all,
I'm trying to identify files that do not have matches for certain strings. FYI, these are files of DNA sequences and I'm trying to find those that are NOT sampled for any species by my group of interest (e.g., genes that are specific to that group of organisms).
I tried this code but it's actually yielding a list of files that DO match my regexp.
Code:
for FILENAME in *.fas
do
grep -q -L ">PBAH" $FILENAME && grep -q -L ">SKOW" $FILENAME && grep -q -L ">CGRA" $FILENAME && echo $FILENAME
done
Basically I want to somehow go through and find files that do not contain ">PBAH", ">SKOW" or ">CGRA". Any assistance would be greatly appreciated!
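A likely culprit: -q and -L conflict. -q silences grep and makes it exit 0 as soon as a match is found, so the -L (files-without-match) meaning is thrown away, and each && fires precisely when the file DOES match. A minimal corrected sketch, testing for "none of the three tags" instead:
Code:
for FILENAME in *.fas
do
    # -e supplies several patterns; grep -q succeeds if ANY of them matches,
    # so || echoes only the files where NONE of them matched
    grep -q -e ">PBAH" -e ">SKOW" -e ">CGRA" "$FILENAME" || echo "$FILENAME"
done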
Best,
Kevin
They say there are no secrets in Linux. I am finding that learning about Linux is becoming a lifelong experience. I have just started using the Debian distribution that is behind the Raspberry Pi. My first problem was that the display would go to sleep after about 15 mins when not used. I wanted to turn this feature off; that is, I wanted the display to be on all the time.
After some web searching I came across a way to do this. It involved a directory in the root area called "lightdm", which stands for Light Display Manager (I think).
Under this is a file called lightdm.conf. Just one line in the config file gets modified. Doing this via the monkey-see-monkey-do method works, but trying to find out how this works and exactly what the cryptic commands do ends up being a frustrating, endless search. I tried to find the source code for lightdm, but its documentation is certainly not for beginners.
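For reference, the commonly posted Raspberry Pi edit is in /etc/lightdm/lightdm.conf, giving the X server flags that disable the screen-saver timeout and display power management; treat this as a sketch of that widely cited change, not an authoritative recipe:
Code:
[SeatDefaults]
# -s 0 disables the X screen-saver timeout; -dpms turns off display power management
xserver-command=X -s 0 -dpms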
I tried finding out what a "greeter" was: once again, huge amounts of time spent trying to make sense of endless terminology.
It seems that nowadays trying to learn about the details of any software system is just so hard. Try finding good documentation on drivers, for example.
Don't get me wrong, I love the fact that at least Linux is open source, but I wish there were an easy way to learn about it.
I'm trying to figure out if find could do this. I have a folder with 1000 files. I want to delete 150 files in that folder regardless of timestamp and filename. Is there a tool, command, or option of find that could do this? Please let me know.
Combining -mtime or -ctime with find is not advisable, since it will not count the files; even if there are matches, I would still need to add up the files until I reach 150.
Any suggestions?
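A hedged sketch with GNU tools: list the files, cut the list off after 150, and hand it to rm. The folder path is a placeholder. head -z with xargs -0 keeps unusual filenames safe; the newline variant is fine for plain names:
Code:
# null-delimited, safe for any filename (GNU find, coreutils head >= 8.25, xargs)
find /path/to/folder -maxdepth 1 -type f -print0 | head -z -n 150 | xargs -0 rm

# simpler variant for filenames without spaces or quotes
find /path/to/folder -maxdepth 1 -type f | head -n 150 | xargs rm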
Hello all,
As the title says... what's the best way of getting Linux to do another command after the first one?
e.g. Say I type find . -iname "*.dll" - this displays a list of all the DLLs in a folder and its subdirectories. Then I want to copy these files, or du them, or run other commands on them.
The three things I am aware of are:
Pipe |
awk
xargs
Which one is the most useful/standard/best practice to use?
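They solve different problems, so a sketch of each in context may help; the paths are hypothetical. A plain pipe feeds text to any filter that reads stdin (awk included), xargs turns that text into command-line arguments, and find's own -exec avoids the pipe altogether:
Code:
# pipe: the next command reads the list as text on stdin
find . -iname '*.dll' | wc -l

# xargs: the list becomes arguments (-print0/-0 survives spaces in names)
find . -iname '*.dll' -print0 | xargs -0 du -ch

# find's built-in -exec: no pipe needed at all
find . -iname '*.dll' -exec cp {} /tmp/dlls/ \;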
Thanks
Dear Linux forum,
Could I output a list of five books, with their filenames as titles, into one file?
In order to output all the contents of all the files with their filenames, there was: find . -type f | while read x; do echo -e "\n$x"; cat "$x"; done > бетховен.txt
Although the files are successively named 1Atitle... 5Atitle (they are actually 1АБетховен... 5АБетховен), the first two in the output are not 1A and 2A but 1A and 5A, with 2, 3 and 4 after. It now breaks everything I hoped for.
Could the task be done by the head, cat or grep command? cat has no filename parameter, head can't output the whole file, and grep has a filename parameter but its primary use is searching line by line. And with find I couldn't write each file by hand...
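If the real problem is the ordering, note that find makes no promise about output order; piping the list through sort before the loop may be all that is missing. A sketch of the same command with that one change, filenames as in the original:
Code:
# sort the file list so 1А..5А come out in sequence, then append each name and its contents
find . -type f | sort | while read -r x
do
    echo -e "\n$x"
    cat "$x"
done > бетховен.txt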
I've also got another command, awk '{ print FILENAME, $0 }' (it is claimed to show the filename before each line, though for me it didn't come out right).
Currently I blame the Linux learning curve: Google results, unanswered messages, and all that even after translating a nicely phrased question directly into English. Would it be so hard to make the Unix command language more descriptive, so that you could write it the way you think?
I'm deeply sorry for all that grief!
Hi there. I have Ubuntu 14.04 installed. Actually I have been doing a lot of work in this OS for about a year. The thing I still cannot comprehend is how to find the files I installed. In this particular case I need glut.h for the g++ compiler. So I go here and do this command:
Code:
sudo apt-get install freeglut3-dev
And I find out that I already have the newest version (which I have suspected, since I recall installing it).
So, the next step is to find the glut.h file and reference it with an #include directive. I cannot find it anywhere. This website says it has to be here:
Code:
/usr/lib/x86_64-linux-gnu/libglut*
Why's the asterisk? Is it a footnote or part of the code?
I don't seem to have a /usr/ directory. I cannot find it anywhere.
How does the Ubuntu directory structure work?
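The asterisk, for what it's worth, is a shell glob (wildcard) matching any file whose name starts with libglut, not a footnote. And one hedged way to answer "where did the package put its files" is to ask dpkg directly, assuming the package name from above:
Code:
# list every file installed by the package, then pick out the header
dpkg -L freeglut3-dev | grep glut.h
# typically reports something like /usr/include/GL/glut.h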
Thanks, - A.
hi guys
I am trying to find the "size" of a "block" of data in LARGE data files; the example below, test_data.txt, is very simplified. By "size" I mean the difference in line numbers of a block, and the "size" will be constant throughout the file. So:
1234 6.600000 4321
1234 8.500000 4321
1234 1.800000 4321
1234 2.300000 4321
1234 8.500000 4321
1234 2.800000 4321
If I define a block as starting whenever I find 8.500000 in the second column, then in the example the block size would be 3, because 8.500000 occurs on the 5th line and on the 2nd. Right now I am using
Code:
grep -n "8.500000" test_data.txt | cut -f1 -d:
and/or
Code:
awk '/8.500000/ {print FNR}' test_data.txt
Obviously I don't remember how to tag text as code?
BTW, the grep command is much, much faster.
Both of these commands give an entire list of line numbers (a long list for files greater than a gig), which I then have to subtract one from another to come up with 3 in the example. Not that I'm opposed to doing math, but I would think awk or grep should be able to do this for me.
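For what it's worth, awk can do the subtraction itself and quit at the second match, which also avoids scanning the rest of a multi-gigabyte file. A sketch, assuming the marker value and filename from above:
Code:
# print the line-number distance between the first two rows whose 2nd column is 8.500000
awk '$2 == "8.500000" { if (prev) { print FNR - prev; exit } prev = FNR }' test_data.txt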
ideas?
tabby