Dd Command For Backing Up Linux OS... Am I Doing This Right?

hey guys, I understand how dd works. I've been researching it and I see that
if="" is the source of what I want to copy
of="" is the target that I want to send the backup to

So, what I am trying to do is copy my hard drive (/dev/sda1) on the saturn server over SSH to my backup server skyline.

so [saturn harddrive /dev/sda1] -----backup---> [skyline:/backup/directory]


this is the command I am using
sudo ssh willc86@skyline "dd if=/dev/sda1 " | dd of=/backup/saturn.img

However, it is saying "failed to open /dev/sda1: permission denied" and I am running as root. Any idea why?

thanks a bunch guys!
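
If I'm reading the error right, the sudo only applies to the local ssh command; the dd inside the quotes runs on skyline as willc86, so it's skyline's /dev/sda1 it can't open (and that isn't even the drive I want). A rough sketch of what I think I should be running instead, on saturn itself, assuming willc86 can write to /backup on skyline (and ideally with the partition unmounted or read-only so the image is consistent):

Code:
# run on saturn: read the partition locally as root, write the image on skyline
sudo dd if=/dev/sda1 bs=4M | ssh willc86@skyline "dd of=/backup/saturn.img bs=4M"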


Similar Content



Need Help Understanding Luckybackup

gold finger was kind enough to share this with me a while ago:

Quote:
Do backups to either another HDD, partition, or a USB stick (if big enough to hold your data). Can use program to make an initial backup of /home/gregory; then use it to periodically update that backup by having it sync between your installed Xubuntu /home/gregory and the backup copy. The sync function will just copy over things that are new or changed, rather than copying everything all over again.

Assuming your Xubuntu filesystem is Ext4, example of doing initial backup would be something like this:

* Spare USB with large partition formatted as Ext4 and labeled "BACKUPS"
* Open luckybackup and choose "Backup" function
* "Source" = /home/gregory
* "Destination" = /media/gregory/BACKUPS (might be under /media/BACKUPS)
* Check box to not create new directories (it will just do exact copy of source)


After initial backup, either make a new task for syncing, or modify the backup task to turn it into a syncing task instead. Then use that periodically to update the backed-up /home/gregory.
I've downloaded Luckybackup and have been experimenting with it, but I'm still not sure of the best way to go about using it as a backup. Like in gold finger's advice, why would I check the box to not create new directories? It seems to me that doing it without checking the box re-creates things just the way they are on my computer, while checking the box just takes everything out of the folders. Seems confusing (and unnecessary). I also have a really hard time finding the errors after a run, and when I do find them I don't know what they mean.

So if I back up the source to the destination, it makes an exact copy on my destination drive (with folders if I don't check the box, without if I do). Then if I do that as an ongoing thing, I will be backing up all my data with each run (which I'm assuming would be much more time consuming), whereas if I choose 'synchronize source and destination' it will only back up the changes between my source and the USB drive (which would be my destination drive)?

Is that the idea?

And I noticed that Lucky did not want to transfer things with colons in them. Googling around, somebody said that problem would be taken care of by switching to ext3 or ext4 for formatting the destination drive (as gold finger suggested). Is this a good idea? (I've always felt comfortable with FAT because if I needed to plug my flash drive into a Windows machine it would work, as well as with Linux.)

So the first time I use Lucky I choose "backup source inside destination" and, of course, the source and destination. Should I check the "Do NOT create extra directory" box? (Again, that seems off, as 95% of what I'll be backing up is in folders.)
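
From what I can tell, Luckybackup is basically a front end for rsync, and I think that checkbox just controls whether the source is passed with a trailing slash. A rough sketch of the difference, using gold finger's paths (these are only my guesses, not what Lucky actually runs):

Code:
# box unchecked: an extra gregory/ directory is created inside the destination
rsync -a /home/gregory /media/gregory/BACKUPS/

# box checked: only the contents of gregory go straight into BACKUPS
rsync -a /home/gregory/ /media/gregory/BACKUPS/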

Then after I've done that, I choose the sync option?

A lot of stuff. I know. Thanks.

PS. As a slight complication I have the data (basically the "home" folder) of my two computers (work and home) synced via Copy.com.

Rsync, Reliable "copy And Paste" Type Of Backup In Case Things Break?

What I did in windows was create images of my drive and restore them.

in linux I am running

Code:
rsync -aAXv --exclude={"/home/*","/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /* /path/to/backup/folder

and this creates a folder for me with all my files, and apparently saves meta data like permissions and paths...

Since I'm using Arch and things break sometimes, I occasionally end up booted into a CLI with errors and cannot figure my way out since I'm a noob... would I be able to just delete my entire root and replace it with the rsync backup without a problem?
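
In case it helps frame the question, this is roughly what I imagine the restore would look like from a live USB (the partition name and backup path are just placeholders for my setup, and since /home was excluded from the backup I'd leave --delete out so it isn't touched):

Code:
# booted from an Arch live USB, assuming / is /dev/sda2 and the backup drive is already mounted
mount /dev/sda2 /mnt
rsync -aAXv /path/to/backup/folder/ /mnt/
# reinstall the bootloader so the restored system boots
arch-chroot /mnt grub-install /dev/sda
arch-chroot /mnt grub-mkconfig -o /boot/grub/grub.cfg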

What Is The Rsync Flag To Ignore Permissions

I am using rsync to back up files to another machine. The users on my fileserver do not exist on the backup server, so rsync throws errors about the permissions. It copies the files fine, but I want to get rid of the errors and have rsync ignore the permissions when backing up.

/backup is a mounted ftp directory

Below is the current command and output:
Code:
root@Fileserver:~# rsync -av --delete /shared/fileshare/ /backup/backup
building file list ... done
created directory /backup/backup
./
manager/
manager/chironfs.txt
manager/cronman.txt
manager/curlftpfs.txt
manager/curlman.txt
manager/getnetaddress.txt
manager/grepman.txt
manager/rsyncman.txt
manager/tarman.txt
public/
user1/
user10/
user2/
user3/
user4/
user5/
user6/
user7/
user8/
user9/
rsync: chown "/backup/backup/manager/.chironfs.txt.c6MbJ7" failed: Operation not                  permitted (1)
rsync: chown "/backup/backup/manager/.cronman.txt.hdBG4P" failed: Operation not                  permitted (1)
rsync: chown "/backup/backup/manager/.curlftpfs.txt.t1sG4L" failed: Operation no                 t permitted (1)
rsync: chown "/backup/backup/manager/.curlman.txt.6oWPoW" failed: Operation not                  permitted (1)
rsync: chown "/backup/backup/manager/.getnetaddress.txt.V8z8Kk" failed: Operatio                 n not permitted (1)
rsync: chown "/backup/backup/manager/.grepman.txt.REh4WW" failed: Operation not                  permitted (1)
rsync: chown "/backup/backup/manager/.rsyncman.txt.ho8VNM" failed: Operation not                  permitted (1)
rsync: chown "/backup/backup/manager/.tarman.txt.BkcmeS" failed: Operation not p                 ermitted (1)

sent 211115 bytes  received 274 bytes  6710.76 bytes/sec
total size is 210263  speedup is 0.99
rsync error: some files could not be transferred (code 23) at main.c(977) [sender=2.6.9]
root@Fileserver:~#

I tried adding the "no" prefix to -p (--no-p), but it still didn't work; see below:
Code:
root@Fileserver:~# rsync -av --no-p --delete /shared/fileshare/ /backup/backup
building file list ... done
./
manager/
manager/chironfs.txt
manager/cronman.txt
manager/curlftpfs.txt
manager/curlman.txt
manager/getnetaddress.txt
manager/grepman.txt
manager/rsyncman.txt
manager/tarman.txt
public/
user1/
user10/
user2/
user3/
user4/
user5/
user6/
user7/
user8/
user9/
rsync: chown "/backup/backup/manager/.chironfs.txt.6Q3eP2" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.cronman.txt.FC8Orx" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.curlftpfs.txt.mlVSN9" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.curlman.txt.vlJ4b1" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.getnetaddress.txt.LXmft0" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.grepman.txt.SVuaye" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.rsyncman.txt.KTNYqA" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.tarman.txt.zcU90c" failed: Operation not permitted (1)

sent 211115 bytes  received 274 bytes  7686.87 bytes/sec
total size is 210263  speedup is 0.99
rsync error: some files could not be transferred (code 23) at main.c(977) [sender=2.6.9]
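
For what it's worth, those messages are chown failures, which I believe come from the -o/-g (owner/group) part of -a rather than from -p, so turning off just the permissions doesn't help. Something like this should skip ownership entirely (untested on my end, same paths as above):

Code:
rsync -av --no-owner --no-group --no-perms --delete /shared/fileshare/ /backup/backup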

Using Find And Pipe To Tar

I am trying to use tar in combination with find. The goal is to find all files in /export that have been modified in the last 24 hours (for backup purposes), then tar them so I can untar them on the backup server, updating just the modified files.

Perhaps there is a better way; I have tried using cpio, but the problem comes in when I copy to the NAS drive (NTFS): I lose all my owner/group and permissions. I have found that if I tar the files, then copy them to the NAS, when I untar on the server it will retain the owner/group and permissions.

So… here is what I have tried:

First, I use the find command to see what files should be in the tar archive.
Code:
/export $ find . -depth -mtime 0 -print
./file4
./file3
.

OK, that looks right; now I will try to pipe that into tar
Code:
/export $ find . -depth -mtime 0 -print0 | tar -czvf backup.tar.gz --null -T - 
./file4
./file3
./
./share/
./share/pdf/
./share/pdf/penny-2014-09-03-11:41.30.pdf
./share/pdf/penny-2014-09-03-14:25.17.pdf
./share/pdf/penny-2014-09-03-11:24.36.pdf
./share/pdf/penny-2014-09-03-14:37.12.pdf
tar: ./share/pdf/.directory: Cannot open: Permission denied
./share/pdf/penny-2014-09-02-14:52.06.pdf
./share/pdf/penny-2014-09-03-12:18.43.pdf
tar: ./share/PDF: Cannot open: Permission denied
./share/file3
tar: ./share/.directory: Cannot open: Permission denied
./dir1/
./dir1/file1
./file4
./file2
./file3
tar: ./.directory: Cannot open: Permission denied
./list
tar: Exiting with failure status due to previous errors

It seems that it is trying to tar all the files in that directory. When I view the files in backup.tar.gz, all of the files from /export are in there, not just the modified ones.
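
I suspect the culprit is that "." itself matched -mtime 0 (the directory's mtime changed), and once tar is handed "." it archives everything underneath it. Restricting find to regular files should, I think, leave only the changed files in the archive:

Code:
/export $ find . -depth -mtime 0 -type f -print0 | tar -czvf backup.tar.gz --null -T -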

How To Select Newest "dated" Directory?

Hi,
I have a /backup directory which contains 5 days worth of backups:
2015-03-12_03-01-07
2015-03-11_03-01-07
2015-03-10_03-01-07
2015-03-09_03-01-07
2015-03-13_03-01-07

I need to copy the content of the NEWEST backup. Ideally, I'd like to find the newest backup and assign it to a variable for later use.
I know I can use Code:
ls -lt

to sort directories, but how do I pick the first one and store it into a variable?
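
Since the directory names are timestamps, a plain sort should already put the newest one last; a sketch of what I have in mind (the restore path is just an example):

Code:
# newest backup directory, relying on the YYYY-MM-DD_HH-MM-SS naming
newest=$(ls -d /backup/*/ | sort | tail -n 1)
echo "Newest backup: $newest"
cp -a "$newest". /restore/target/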

Thanks,

Rsync Copy Permission Denied

hi experts

I am rsyncing a user's home dir between an NFS mount and the local PC, but when it tries to copy over the hidden files it fails with permission denied. Both dirs are owned by the proper user and I am root when I execute the script, so I am not sure what went wrong here.
For example: this is the content and permissions of the source:

-rw------- 1 user test 115 Nov 14 11:28 .bash_history

and here is my error:

rsync: send_files failed to open "/home/user/.bash_history": Permission denied (13)
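
My best guess is NFS root squashing: root on the client gets mapped to nobody on the server, so a mode 600 file owned by the user can't be read even though the script runs as root. If that's it, I could either run the copy as the owning user or relax the squashing on that export; a sketch of the export-side change (the path and client range are assumptions):

Code:
# /etc/exports on the NFS server
/home  192.168.1.0/24(rw,sync,no_root_squash)
# then reload the exports:
exportfs -ra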

Thanks

Backups And External Drives

Hello everyone,

I recently had an issue where I lost my whole backup server: an electrical overload caused the server to literally explode and fried all 4 of my terabyte drives... needless to say, I have no more backups because of this. Everywhere I read about backups said that setting up a RAID array would let me keep good backups... boy did I learn the hard way that I also need some sort of external backup option, which brings me to this post and my questions:

I'm using Ubuntu 14.04 LTS Server on an older Dell PowerEdge 600SC, and I was thinking of using WD Passport 1TB external drives as my "offsite" backup option. I don't have a lot of data, and my current backup schedule is only a weekly backup, so I'm thinking that if I have two of these Passport drives I can keep one drive offsite and one attached to the server, and rotate them every 4 weeks so as not to lose all my data.

Here's my question: Ideally, I would love to just be able to unplug the current drive, plug in the new drive and have everything work. However, I don't see this actually working, but if there's a way to do this, that would be totally awesome.... ;-)

So, realistically, I know I will have to unmount the one drive, unplug it, then plug in the new drive and mount it on the system. Is there a way to mount it at the same mount point automatically, so that I don't have to rewrite my backup script each time I swap drives and the backups always go to the same mount point? Or will the UUIDs get messed up each time I do this?
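
One idea I want to try: give both Passport drives the same filesystem label and mount by label in /etc/fstab, so whichever drive is plugged in lands on the same mount point and the script never has to change (the label, device, and mount point below are just examples):

Code:
# label each drive once (ext4 example)
sudo e2label /dev/sdb1 OFFSITEBKP

# /etc/fstab entry shared by both drives
LABEL=OFFSITEBKP  /mnt/backup  ext4  defaults,noauto  0  0

# after swapping drives
sudo mount /mnt/backup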

Hopefully this makes sense and an easy solution can be found to accommodate this idea.....

Thanks again for all your help. This site is awesome for newbies such as myself........

Mikey

Backing Up With Dd

hi guys,

I want to back up my CentOS server that has a few virtual machines on it... I read that some people were saying dd is a very good way to do this, but a lot of people don't use it because of fear of the command line.

I would like to write a script that backs up the entire hard drive to an external USB hard drive.
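
Roughly, the kind of script I have in mind is something like this (the disk name and mount point are guesses, and I assume the VMs would be shut down first so the image is consistent):

Code:
#!/bin/bash
# write a compressed image of the whole disk to the external USB drive
OUT=/mnt/usbbackup/centos-$(date +%F).img.gz
dd if=/dev/sda bs=4M conv=noerror,sync | gzip -c > "$OUT"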

i have a couple of questions though.

1) Am I better off using cloning software like Acronis or StorageCraft?

2) Can dd also restore a system to a bootable state, or is it really just a backup?

3) Can you still use the system while it is backing up?

thanks...

GRUB2 Scripts To Use Labels For Friendly Names

I am using Linux Mint and the GRUB menu gets configured automatically using scripts in /etc/grub.d. The menuentry that gets created is something like Code:
"linux mint (on /dev/sda1)"

I use external drives sometimes and also have Linux on my hard drive, which I also switch between computers. It gets confusing when it says /dev/sda2 when it means something else. It boots fine because the actual boot command uses UUIDs. How can I change the text of the (script-generated) description to also use partition labels or the UUID (or the first few characters), just so I know which install will actually boot? Like this: Code:
"Linux Mint (OFFICESSD)"
"Linux Mint (HOMEHDD)"
"Ubuntu (SANDISK)"
"Ubuntu (IMATION)"

I realise (maybe it's the best way) that I can change "GRUB_TITLE=Linux Mint 17 Cinnamon 64-bit" in /etc/linuxmint/info, but I would prefer a smoother way.
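
One workaround I'm considering is to leave the generated scripts alone and add my own entries to /etc/grub.d/40_custom that find the partition by label, roughly like this (the label and the /vmlinuz and /initrd.img symlink paths are guesses for my install), then run update-grub:

Code:
menuentry "Linux Mint (OFFICESSD)" {
    search --no-floppy --set=root --label OFFICESSD
    linux  /vmlinuz root=LABEL=OFFICESSD ro quiet splash
    initrd /initrd.img
}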

Permission Denied When Trying To Execute An Application On A SD Card

Hi,
I made some little applications with Qt Creator and I wanted to run them on an embedded Linux board (Linux 2.6.24). To transfer the files I use an SD card. If I move the applications to "/bin" after having mounted the SD card and then "chmod a+x" them, then I have no problem running them.
But if I mount the SD card and try to run the applications directly in the folder where I mount it, I get an error: "Permission denied". Also, when using the command "ls" I notice that if I keep the files in the mounting folder and try to "chmod a+x" them, the modification doesn't happen. They stay "greyed" and don't go "green". (I don't know if this colour code is a standard for Linux terminals, but maybe it could help you understand the problem.)
When mounting the SD card I use the command:
Code:
mount -t vfat /dev/mmcblk0p1 /mnt/SD

So the files are located in /mnt/SD.

Am I missing something, or is it not possible to run something like that?

EDIT :
I tried "mount -t vfat -o umask=0000 /dev/mmcblk0p1 /mnt/SD" to chnage how i am mounting the SD card.

Still "Permission Denied".

With "ls -l" i can see that the permissions are staying :
-rw-rw-rw-

Even if i try something like "chmod 777".

It seems to be a problem related to the fact that the SD card is formatted as FAT32, but it must stay like that.
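
From what I've read, FAT32 simply doesn't store Unix permission bits, so chmod on the mounted card is a no-op and the effective permissions come entirely from the mount options. My next test will be something like this (and I suspect I need to unmount first, otherwise the new options never take effect):

Code:
umount /mnt/SD
mount -t vfat -o rw,exec,fmask=0022,dmask=0022 /dev/mmcblk0p1 /mnt/SD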