Syntax Help To Delete Some Files From A Directory On CentOS

Hi

I have a folder:

Code:
/usr/local/src/myfolder

and in it I have a few folders and files ...

Now I want to delete from this folder the files named:

Code:
file1.txt 
image.jpg
info.html
another.txt

and leave all the other folders and files ...

What is the correct syntax for this?

If possible, I'd like to avoid cd /usr/local/src/myfolder followed by rm -r ...., so I can run it from anywhere ....
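For reference, a minimal sketch that should work from any directory, assuming the four filenames are exactly as listed above:

Code:
rm -f /usr/local/src/myfolder/file1.txt \
      /usr/local/src/myfolder/image.jpg \
      /usr/local/src/myfolder/info.html \
      /usr/local/src/myfolder/another.txt

Since these are plain files, -r is not needed; -f just suppresses errors if one of them is already gone. Brace expansion (rm -f /usr/local/src/myfolder/{file1.txt,image.jpg,info.html,another.txt}) does the same in one line.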

Thanks


Similar Content



Hidden Folders And Files Become Viewable In Home Directory

Hi guys,

Through no apparent action of mine, hidden folders and files are showing
in my /user/home directory. They are as follows:

folders:
.adobe .cache .config .cups .filezilla .gimp-2.8 .gnupg .gphoto .gstreamer-0.10 .icedtea .java .local .macromedia .mozilla .pki .thumbnails

Files:
.bash_history .bashrc .esd_auth .ICEauthority

In my / directory
File: ./readahead

Seeking help to verify that the above folders and files are not from a harmful source or application.

If they do not pose any threat to the system, how can I conceal
these folders and files so that they don't show up any more in
my home and / directories?
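For context, names starting with a dot are hidden by default on Linux (the ones listed above look like ordinary per-application configuration files): a plain ls omits them, and most file managers toggle them with Ctrl+H. A quick check, assuming a standard shell:

Code:
ls ~      # dotfiles are omitted
ls -a ~   # -a (all) lists them, including . and ..

So if they are suddenly visible, it is usually just a "show hidden files" setting in the file manager that got switched on, rather than anything harmful.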

Many thanks.

Find 30 Days Old And Delete: Prints Error Msg "File Not Found" After Deleting It

I have a shell script to find folders which are more than 25 days old and delete them, putting the deleted folder details into a log file like this:

Code:
 find /ahome/xxx/$FOLDER -type d -mtime +25  -exec ls -ld {} \;  -exec rm -rf {} \;  >> mylogfile.log

After running this command, it deletes the folders and logs the folders deleted, but it also prints an error msg:
Code:
find: /ahome/prksh/dir/test: No such file or directory

How can I suppress the error msg?
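The message appears because find deletes a matching directory with rm -rf and then still tries to descend into it. A sketch of one common fix, adding -prune so find does not descend into directories it has just matched (paths are the placeholders from the post above):

Code:
find /ahome/xxx/$FOLDER -type d -mtime +25 \
  -exec ls -ld {} \; -exec rm -rf {} \; -prune >> mylogfile.log

A blunter alternative is to leave the command as it is and append 2>/dev/null, which hides the message without fixing the cause.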

Tar Is Not Extracting All Files

Hi,

I have a tar.gz file which, when I right-click "extract here" on my Ubuntu computer, is extracted without issue. I get all the folders and all the files (links and one executable) in them.
But when I transfer this tar.gz to another system and use the command:
Code:
tar zxvf nameofthefile.tar.gz

I see in the terminal things like this:
Code:
usr/bin/unzip/lzop        
usr/bin/xargs
bin/mountpoin
usr/bin/telnet
bin/pipe_prog
usr/bin/lzma        
bin
usr/bin/sort

which means that the files must have been extracted, but when using "ls" in the folders they are empty. None of the links are there and only the executable is extracted.

When I copy the archive back from the new system to my Ubuntu system and extract it, all the files are there without issue. Why is it only creating the folders and not extracting the links?
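One way to narrow this down is to compare what each system thinks is in the archive, and which tar implementation each uses (some minimal tars, e.g. busybox, handle symlinks differently than GNU tar):

Code:
tar ztvf nameofthefile.tar.gz   # list contents; symlinks show as 'lrwxrwxrwx name -> target'
tar --version                   # identify the tar implementation on each system

If the links are listed by ztvf but missing after extraction, the receiving tar or filesystem (FAT, for instance, cannot store symlinks) is the likely culprit.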

How To Delete A Number Of Files

I'm trying to figure out if find could do this. I have a folder with 1000 files. I want to delete 150 files in that folder regardless of timestamp and filename. Is there a tool, command, or option of find that could do this? Please let me know.

Combining mtime or ctime with find is not advisable since it will not count the files; even if there were matches, I would still need to add up the files until I reached 150.

Any suggestions?
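A sketch of one way, assuming GNU tools (head -z needs coreutils 8.25+), that any 150 files will do, and with /path/to/folder as a placeholder:

Code:
find /path/to/folder -maxdepth 1 -type f -print0 \
  | head -z -n 150 \
  | xargs -0 rm --

The -print0 / -z / -0 chain keeps filenames with spaces or newlines intact, and -maxdepth 1 stops find from recursing into subfolders.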

Is It Good To Have Find Command Running In Cron Once Every Day To Delete Older Files

I need to clean up some folders. I have a cron job which uses the find command to find and delete files older than 25 days.

Code:
find $FOLDER_PATH/$FOLDER -depth -regex  ^$FOLDER_PATH/$FOLDER/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ -type d -mtime +25  -exec ls -ld {} \;   >> /tmp/deleted_folders.log

While this is running it occupies 1-3% of the CPU, and it may take longer depending on the folder size.

Is it OK to have the find command running as a cron job?
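Running find from cron like this is routine; if the CPU or I/O load is the worry, one option is to run it at reduced priority (assuming nice and util-linux ionice are available; the regex test is omitted here for brevity):

Code:
ionice -c3 nice -n 19 find $FOLDER_PATH/$FOLDER -depth -type d -mtime +25 -exec ls -ld {} \; >> /tmp/deleted_folders.log

ionice -c3 puts the job in the idle I/O class and nice -n 19 gives it the lowest CPU priority, so it only uses resources nothing else wants.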

Can Anybody Explain How Copy.com Works To Me?

I'm running Xubuntu and it was a challenge just getting Copy.com on there. (I installed the desktop app on both of my computers.) Now that I have it though, I don't really know how to use it.

I know this is kind of more a Copy.com question, but I don't know anything about Copy.com (besides having it--lol) and besides, I like you LQ guys.

So yeah, I installed the desktop app for Copy.com on both of my computers. I know that if I put something in the Copy folder that will be available to both computers.

But how Copy does the backing up I don't know.

When I change a file or folder do I have to plop that into the Copy folder every time or does Copy somehow update the file or folder in the Copy folder automatically? (It doesn't seem to.)

Okay, when I, say, take the Documents folder from one computer and plop it into the Copy folder that's that. Then I take the Documents folder from the other computer and plop that into the Copy folder, then all the files from both folders will be in the Copy folder (and the Copy cloud), right?

Now I just removed a couple of files from a folder and copied and pasted the folder into the Copy folder. But then when I looked at the Copy folder the files I'd deleted were still there. What's the process? How does it work?

I mean, how does this work as a way of backing things up AND organizing things? To me it seems like a decent way of throwing stuff into the Copy folder (and cloud), but how is that different from Google Drive? I mean, that's not really a backup, is it? It's like a flash drive in the cloud.

And when I combined the same folders (with the same titles anyway, but they each had different files within them) from the two computers, I'd expected each folder on each computer to end up with all the files that were cumulatively on both. Instead, they stayed the same, and the cumulative set is only in the Copy folder.

I like the notion of just throwing the folders and files into the Copy folder. It's much quicker than Google Drive. But the backing up feature eludes me and the syncing feature makes me fearful that I'll lose data or that the files will become hopelessly less organized.

Thanks.

Setfacl Help

I can't believe I wrote a looong message and it logged me out when I tried to submit it.

So anyway, in short lines:

- I have a network of sites where all sites share same "images" folder
- I have created /home/_images/entities and symlinked it from all websites
- It works great with Apache, when I open /images/ on any of the sites I get list of images and can view them

The problem is suPHP, which changes the process ID of the PHP script to the file owner ID, so when I load site1.com, all scripts are executed as user1 (and files/folders created by those scripts belong to user1:user1). When I load site2.com, all scripts are executed as user2 (and files/folders created by those scripts belong to user2:user2). All these users do NOT belong to the same group, and I wouldn't like to change that, as it is a cPanel/WHM server and I'm afraid I'll screw something up if I change the (primary?) group of all users.

Therefore I need to set it up in such way that all newly created folders and files under /home/_images/entities (owned by root) have read/write permissions for everyone.

Here's the command I used:

Code:
setfacl -Rdm o::rwx /home/_images/entities

To check it:
Code:
root@server1 [~]# getfacl /home/_images/entities/
getfacl: Removing leading '/' from absolute path names
# file: home/_images/entities/
# owner: root
# group: root
user::rwx
group::rwx
other::rwx
default:user::rwx
default:group::rwx
default:other::rwx

This looks fine. However, when I try to upload an image via site1.com it looks like this:

Code:
root@server1 [/home/_images/entities]# ls -l
total 24
drwxrwxrwx+ 5 root    root    4096 Jan 14 06:25 ./
drwxrwxrwx  5 root    root    4096 Jan 12 13:08 ../
drwxrwxr-x+ 3 user1   user1   4096 Jan 14 06:25 1/

And in folder "1" is the image (and thumbs folder):

Code:
root@server1 [/home/_images/entities/1]# ls -l
total 236
drwxrwxr-x+ 3 user1   user1     4096 Jan 14 06:25 ./
drwxrwxrwx+ 5 root    root      4096 Jan 14 06:25 ../
-rw-rw-rw-  1 user1   user1   225569 Jan 14 06:25 689048f221ab7c556f4d482a9d92b2d6.jpg
drwxrwxr-x+ 2 user1   user1   4096 Jan 14 06:25 thumbs/

My questions:

1) Why do newly created folders not have "write" permission for everyone else [not user and/or group]? If I upload the first image from site1.com, then I can't upload further images from any other site, although all sites can display them.

2) What is the + at the end of the permissions list? (drwxrwxr-x+)

3) Why do newly created files have only "rw" permissions for user, group AND everyone else, and no execute permission? I don't actually need the execute flag set here, but from my command you can see I've set "o::rwx", so it should be there (or not?)

Actually the real problem is #1: other users can't write to this folder, so images can't be uploaded from the other sites, nor can the other sites create (missing) thumbnails.
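For reference: the trailing + in drwxrwxr-x+ just flags that an extended ACL is present (question 2). For questions 1 and 3, default ACL entries act as an upper bound, not a guarantee: they are ANDed with the mode the creating process requests. A minimal sketch, reusing the path above:

Code:
# touch requests mode 0666, so execute bits can never appear,
# matching the -rw-rw-rw- on the uploaded .jpg above
touch /home/_images/entities/demo.txt
ls -l /home/_images/entities/demo.txt    # -rw-rw-rw-
# if the upload script calls mkdir($path, 0775), "w" for others is
# masked off the same way; mkdir($path, 0777) would keep it

So the missing write bit on folder 1/ presumably comes from the mode the PHP upload script passes to mkdir, not from the ACL itself.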

Repo Syntax Issue Rhel/centos

Hello again,

I need to understand, once and for all, the syntax of .repo files.

When I need a repository, I create the repository file under /etc/yum.repos.d/

goofy.repo

with this syntax

[repository_name]
[goofy]
baseurl=ftp:///mydirectory/myrepodir
gpgcheck=0

---
Myrepodir contains all the *.rpm files i need

when i exit and save i get anytime this error

Code:
ftp:///mydirectory/myrepodir/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - ""
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: repobase. Please verify its path and try again

What am I doing wrong? How can I just tell my RHEL/CentOS system,
"Hey, for rpms, look inside /mydirectory/myrepo"?

Thanks

Shell Script Problem

Hi,

I'm wondering if anyone can help me make a script that searches through a specific folder (in this case /tmp) for files with given permissions (755) and then deletes all the other files with different permissions.

The correct permission is, as mentioned, 755, and those are the files that should be kept (not deleted).
All other files in this folder with different permissions should be deleted.
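A minimal sketch, assuming GNU find and that "files" means regular files anywhere under /tmp; -perm 755 matches the exact mode and ! negates the test:

Code:
# delete every regular file under /tmp whose mode is not exactly 755
find /tmp -type f ! -perm 755 -exec rm -f {} +

Swapping rm -f for ls -l first gives a dry run showing what would be deleted.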

Thanks!

Command Manual Working But Not On Cron

Hi

When I run this command manually on CentOS 6.6 it works:

Code:
/usr/bin/find /backup/ -type d -mtime +1 -print0 | xargs -0 rm -rf

but as a cron job it doesn't, as I can see a folder with files there from Mar 28:

Code:
55 5 * * * /usr/bin/find /backup/ -type d -mtime +1 -print0 | xargs -0 rm -rf

And here are the cron logs showing that it executes at the correct time:

Code:
Mar 30 05:55:01 server CROND[9526]: (root) CMD (/usr/bin/find /backup/ -type d -mtime +1 -print0 | xargs -0 rm -rf)

Any ideas why?
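One way to see what is going wrong is to capture the job's output; cron mails or discards stderr by default, so redirecting both streams to a file (the log path below is just an example) will show any errors:

Code:
55 5 * * * ( /usr/bin/find /backup/ -type d -mtime +1 -print0 | xargs -0 rm -rf ) >> /tmp/backup_cleanup.log 2>&1

The subshell makes sure errors from both find and xargs land in the log; cron's minimal PATH normally still covers xargs and rm, but the log will confirm it.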

Thanks