Reversing A Wrong Command

Hi,

I created a user, and when I logged in it took me to the $> prompt.
I was reading online and stumbled upon this command:
cp -r /etc/skel/.* etc/skel/*
I was stupid enough to copy and run it.
Now our home folder is full of files and my boss is not happy.

Is there a way to reverse this command, or can I just delete the files that were copied?
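
A minimal cleanup sketch, assuming the stray files are exactly the ones /etc/skel provides and that they ended up in the home directory as described (the username below is a placeholder):

Code:
# List everything /etc/skel ships, dotfiles included, so the copies can be
# identified in the affected home directory; check each one before removing it,
# since a file of the same name may belong there legitimately
ls -A /etc/skel
# Then remove the unwanted copies interactively, e.g. (placeholder path):
# rm -ri /home/username/unwanted_file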

Thanks


Similar Content



SSH Public Key Fingerprint

First time post so I hope it's not too long winded!

I've just installed CentOS 7.1 and created an additional user.

In my first PuTTY session I logged in as root, got the public key fingerprint message, and clicked Yes to accept.

However, I noticed that when I logged in as the other user I did not get the prompt, and that user's home directory didn't have a .ssh directory.

OK, so I created a .ssh (chmod 700) directory within /etc/skel and created a new user. Logged in with that account and still no prompt, although I do now have a .ssh directory generated.

I've tried deleting the known_hosts files in root's .ssh directory and restarting the sshd daemon, but it's not working.

How do I get it to prompt with the original public key again?
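
For what it's worth, the "accept this fingerprint" prompt is cached on the client side, not in the server's home directories; PuTTY keeps accepted host keys in the Windows registry, so deleting known_hosts on the server shouldn't change what the client asks. A sketch assuming an OpenSSH client instead of PuTTY (the hostname is a placeholder):

Code:
# Drop the cached host key for this server so the client prompts again
ssh-keygen -R server.example.com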

Thanks for reading.

How To Delete Number Of Files

I'm trying to figure out if find could do this. I have a folder with 1000 files. I want to delete 150 files in that folder regardless of timestamp and filename. Is there a tool, command, or option of find that could do this? Please let me know.

Combining -mtime or -ctime with find is not advisable, since it doesn't count files; even if there are matches, I would still need to add up the files until I reach 150.

Any suggestions?
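
A minimal sketch of one approach, assuming GNU find/xargs, that any 150 regular files will do, and that no filename contains a newline (the path is a placeholder):

Code:
# Take the first 150 regular files find reports (order is arbitrary) and remove
# them; -d '\n' keeps filenames containing spaces intact
find /path/to/folder -maxdepth 1 -type f | head -n 150 | xargs -d '\n' rm --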

Is It Good To Have Find Command Running In Cron Once A Day To Delete Older Files

I need to clean up some folders. I have a cron job which uses the find command to find and delete files older than 25 days.

Code:
find $FOLDER_PATH/$FOLDER -depth -regex "^$FOLDER_PATH/$FOLDER/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$" -type d -mtime +25 -exec ls -ld {} \; >> /tmp/deleted_folders.log

When this runs it occupies 1-3% of the CPU, and it may take a long time depending on the folder size.

Is it OK to have the find command running as a cron job?
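
A sketch of how the same job could be made gentler, assuming util-linux's ionice is available; it only lowers the job's CPU and I/O priority and leaves the command itself unchanged:

Code:
# Run the cleanup at the lowest CPU and I/O priority so it yields to real work
nice -n 19 ionice -c3 find "$FOLDER_PATH/$FOLDER" -depth -type d -mtime +25 \
    -regex "^$FOLDER_PATH/$FOLDER/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$" \
    -exec ls -ld {} \; >> /tmp/deleted_folders.log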

Exporting Log Data To A File That Matches Stdout

hey guys,

Let's say I want to find out which log files have ntp-related information in them. I use cat and grep to search through the files in /var/log and then export that to a file. This is the command...

# cat /var/log/* | grep ntp > /home/log.txt

The file created by this command will not include the directories the log entries are a part of. Why not? For example, if you run this same command without exporting to the /home/log.txt file, it will show you in stdout which directory each log entry is in. Hope I'm making sense here. My question is: is there a clever way to export to a file so that the file created is structured exactly like the stdout of the command below?

# cat /var/log/* | grep ntp
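
One hedged sketch, assuming GNU grep: when grep is handed the files directly instead of reading cat's combined stream, it prefixes each match with the file name, and that prefix is preserved in the redirected file just as on screen.

Code:
# -H forces the "filename:" prefix even when only one file matches
grep -H ntp /var/log/* > /home/log.txt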

Can Anybody Explain How Copy.com Works To Me?

I'm running Xubuntu and it was a challenge just getting Copy.com on there. (I installed the desktop app on both of my computers.) Now that I have it though, I don't really know how to use it.

I know this is kind of more a Copy.com question, but I don't know anything about Copy.com (besides having it--lol) and besides, I like you LQ guys.

So yeah, I installed the desktop app for Copy.com on both of my computers. I know that if I put something in the Copy folder that will be available to both computers.

But how Copy does the backing up I don't know.

When I change a file or folder do I have to plop that into the Copy folder every time or does Copy somehow update the file or folder in the Copy folder automatically? (It doesn't seem to.)

Okay, when I, say, take the Documents folder from one computer and plop it into the Copy folder that's that. Then I take the Documents folder from the other computer and plop that into the Copy folder, then all the files from both folders will be in the Copy folder (and the Copy cloud), right?

Now I just removed a couple of files from a folder and copied and pasted the folder into the Copy folder. But then when I looked at the Copy folder the files I'd deleted were still there. What's the process? How does it work?

I mean, how does this work as a way of backing things up AND organizing things? To me it seems like a decent way of throwing stuff into the Copy folder (and cloud), but how is that different than Google Drive? I mean, that's not really a backup, is it? It's like a flash drive in the cloud.

And when I combined the same folders (with the same titles anyway, but they each had different files within them) from the two computers, I'd expected each folder on each computer to end up with all the files that were cumulatively on both. Instead, they stayed the same, and the cumulative set is only in the Copy folder.

I like the notion of just throwing the folders and files into the Copy folder. It's much quicker than Google Drive. But the backing up feature eludes me and the syncing feature makes me fearful that I'll lose data or that the files will become hopelessly less organized.

Thanks.

Understanding Configuration Files Better

Hey, I'm aware that /etc/ stores config files and in my home directory I also have dot files as well as a .config folder.

And I'm told not to edit files in /etc/ but to create a copy in my home directory to preserve the original files. Is it as simple as recreating the same path as under /etc/ and editing that copy in my home folder?

Ideally this is how I hope it works, because I don't want to edit /etc/ and end up with a bunch of custom, non-default files.
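
A sketch under the assumption that the program in question reads both a system-wide file under /etc and a per-user dotfile; vim is used purely as an illustration, and the exact filenames vary by program and distribution, so the program's man page is the real guide rather than mirroring the /etc path literally:

Code:
# The system-wide file stays untouched; edits go in the per-user copy
cp /etc/vimrc ~/.vimrc
# vim reads the system vimrc first and then ~/.vimrc, so the user copy wins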

Copy Encrypted Compact Flash Issues With Dd In Ubuntu

Hello, I have a system that uses a compact flash card with a Windows OS and some other files on it; somewhere on it there is also some sort of encrypted licensing information. I have several of these machines and can use the CF cards from the others just fine in this machine. But when I take one of those cards and try to copy it with dd, somehow the machine can tell the difference. It's nothing illegal; it's just too old to buy a replacement. Someone told me they copied one successfully in Linux with the dd command, but mine aren't working. I also can't tell the brand or type of CF card, since all the labels have been removed. All I know is that it's a 256 MB card. So are there any other options besides dd, or is there a deeper level of dd that I can use to copy this info? I'm using something like:

sudo dd if=/dev/sde of=/home/folder/cfcard

then to copy from my hard drive to the blank cf:

sudo dd if=/home/folder/cfcard of=/dev/sde

I'm using a USB CF reader, and when I have my finished CF card everything looks good. The machine can even read it; it just gives me an error that the CF card isn't licensed or is corrupted.
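
A sketch for checking whether the copy itself is bit-identical, assuming the image was taken from the same /dev/sde device named above; if the checksums match, whatever the machine is detecting is probably not in the data dd can reach (for example a hardware serial number on the card):

Code:
# Checksum the card and the image file; identical sums mean dd copied every byte
sudo dd if=/dev/sde bs=4M | md5sum
md5sum /home/folder/cfcard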

Usr/bin

Given that I'm in the home directory, what CLI command should I type to get access to my Documents folder, which I can clearly see in the GUI but cannot seem to access?

I thought logically it should be /home/files/documents or perhaps /home/user/files/documents.

where am i being a dweeb!!!!!....... again
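
A sketch, assuming the default desktop layout where Documents sits directly under the user's home directory (case matters on Linux, so "documents" and "Documents" are different paths):

Code:
cd ~/Documents          # equivalent to: cd /home/your-username/Documents
ls                      # confirm the contents match what the GUI shows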

Setfacl Help

I can't believe I wrote a looong message and it logged me out when I tried to submit it.

So anyway, in short lines:

- I have a network of sites where all sites share the same "images" folder
- I have created /home/_images/entities and symlinked it from all websites
- It works great with Apache; when I open /images/ on any of the sites I get a list of images and can view them

The problem is suPHP, which changes the user ID of the PHP process to the file owner's ID, so when I load site1.com, all scripts are executed as user1 (and files/folders created by those scripts belong to user1:user1). When I load site2.com, all scripts are executed as user2 (and files/folders created by those scripts belong to user2:user2). All these users do NOT belong to the same group, and I wouldn't like to change that, as it is a cPanel/WHM server and I'm afraid I'll screw something up if I change the (primary?) group of all users.

Therefore I need to set it up in such a way that all newly created folders and files under /home/_images/entities (owned by root) have read/write permissions for everyone.

Here's the command I used:

Code:
setfacl -Rdm o::rwx /home/_images/entities

To check it:
Code:
root@server1 [~]# getfacl /home/_images/entities/
getfacl: Removing leading '/' from absolute path names
# file: home/_images/entities/
# owner: root
# group: root
user::rwx
group::rwx
other::rwx
default:user::rwx
default:group::rwx
default:other::rwx

This looks fine; however, when I try to upload an image via site1.com it looks like this:

Code:
root@server1 [/home/_images/entities]# ls -l
total 24
drwxrwxrwx+ 5 root    root    4096 Jan 14 06:25 ./
drwxrwxrwx  5 root    root    4096 Jan 12 13:08 ../
drwxrwxr-x+ 3 user1   user1   4096 Jan 14 06:25 1/

And in folder "1" is the image (and thumbs folder):

Code:
root@server1 [/home/_images/entities/1]# ls -l
total 236
drwxrwxr-x+ 3 user1   user1     4096 Jan 14 06:25 ./
drwxrwxrwx+ 5 root    root      4096 Jan 14 06:25 ../
-rw-rw-rw-  1 user1   user1   225569 Jan 14 06:25 689048f221ab7c556f4d482a9d92b2d6.jpg
drwxrwxr-x+ 2 user1   user1   4096 Jan 14 06:25 thumbs/

My questions:

1) Why do newly created folders not have "write" permission for everyone else [not the user and/or group]? If I upload the first image from site1.com, then I can't upload other images from any other site, while all sites can display them.

2) What is the + at the end of the permissions list? (drwxrwxr-x+)

3) Why do newly created files have only "rw" permissions for user, group AND everyone else, and no execute permission? I don't actually need the execute flag set here, but from my command you can see I've set "o::rwx", so it should be there (or not?)

Actually the real problem is #1: other users can't write to this folder, so images can't be uploaded from other sites, nor can other sites create the (missing) thumbnails.
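
A small diagnostic sketch using the same getfacl tool as above: running it on the directory the upload script just created shows which entries came from the default ACL and which were cut down, since a default ACL is still intersected with the mode the creating process passes to mkdir()/open().

Code:
# Inspect the ACL inherited by the upload script's new directory; entries capped
# by the mask are flagged with "#effective:" comments
getfacl /home/_images/entities/1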

Changing User Name Slight Hangup

hi-

Today I switched my Linux username and changed my home directory to /home/newname. I am doing this because it is not secure to broadcast your username to the world.

At this link:

http://askubuntu.com/questions/34074...ge-my-username

I am wondering if there is a way to do it a little better for me.

Quote:
You can either keep a symlink for backward compatibility, e.g. ln -s /home/newname /home/oldname, or you can change the file contents with sed -i.bak 's/*oldname*/*newname*/g' *list of files*. It creates a backup for each file with a .bak extension.
I tried to do a symlink.

It is not working exactly the way I had hoped. I may not understand it.

When I do the command ls -l, I had to make an alias with awk to parse out the user when I display it. That isn't a big deal, but I noticed both the oldname and the newname are in the printout before filtering it with awk.

Also, my old home directory (the one that matches /home/oldname) is not deleted.

I can log in and the desktop looks fine. I indeed have a new /home/newname.

At what point can I delete /home/oldname?

Is it okay that ls -l is picking up both usernames (new and old) in separate columns?

I wanted to bounce this off a more experienced user to see if there are some minor adjustments I can make so I don't end up with two home directories, and whether it is okay for ls -l to display both the old and new usernames.
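
For reference, a sketch of the rename sequence usermod/groupmod support, assuming it is run as root while the old user has no processes running; oldname/newname are the placeholders from the post, not real accounts:

Code:
usermod  -l newname oldname             # rename the account itself
groupmod -n newname oldname             # rename the matching primary group
usermod  -d /home/newname -m newname    # move the home directory to the new path

Done that way there is only one home directory afterwards, so nothing needs to be symlinked or deleted by hand.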

thanks!

mtdew3q