gold finger was kind enough to share this with me a while ago:
Quote:
Do backups to either another HDD, partition, or a USB stick (if big enough to hold your data). Can use program to make an initial backup of /home/gregory; then use it to periodically update that backup by having it sync between your installed Xubuntu /home/gregory and the backup copy. The sync function will just copy over things that are new or changed, rather than copying everything all over again.
Assuming your Xubuntu filesystem is Ext4, example of doing initial backup would be something like this:
* Spare USB with large partition formatted as Ext4 and labeled "BACKUPS"
* Open luckybackup and choose "Backup" function
* "Source" = /home/gregory
* "Destination" = /media/gregory/BACKUPS (might be under /media/BACKUPS)
* Check box to not create new directories (it will just do exact copy of source)
After initial backup, either make a new task for syncing, or modify the backup task to turn it into a syncing task instead. Then use that periodically to update the backed-up /home/gregory.
I've downloaded luckybackup and have been experimenting with it, but I'm still not sure of the best way to use it for backups. For instance, in gold finger's advice, why would I check the box to not create new directories? It seems to me that leaving the box unchecked re-creates things just the way they are on my computer, while checking it just takes everything out of the folders. Seems confusing (and unnecessary). I also have a really hard time finding the errors after a run, and when I do find them I don't know what they mean. So, if I back up the source, it makes an exact copy on my destination drive (with the enclosing folder if I don't check the box, without it if I do). Then if I do that as an ongoing thing, will I be re-copying all my data with each run (which I'm assuming would be much more time consuming), whereas if I choose 'Synchronize source and destination' it will only copy the changes between my source and my USB drive (which would be my destination drive)?
Is that the idea?
And I noticed that Lucky did not want to transfer files with colons in their names. Googling around, somebody said that problem would be taken care of by formatting the destination drive as ext3 or ext4 (as gold finger suggested). Is this a good idea? (I've always felt comfortable with FAT because if I needed to plug my flash drive into a Windows machine it would work, as well as with Linux.)
So the first time I use Lucky I choose "Backup source inside destination" and, of course, the source and destination. Should I check the "Do NOT create extra directory" box? (Again, that seems off, as 95% of what I'll be backing up is in folders.)
Then after I've done that, I choose the sync option?
A lot of stuff. I know. Thanks.
PS. As a slight complication I have the data (basically the "home" folder) of my two computers (work and home) synced via Copy.com.
(BTW I'm running Xubuntu 15.04)
I'm starting to understand Luckybackup. And gold_finger said:
Quote:
Assuming your Xubuntu filesystem is Ext4, example of doing initial backup would be something like this:
* Spare USB with large partition formatted as Ext4 and labeled "BACKUPS"
I know ext4 is more friendly to Linux, but all my flash drives are FAT32 (and I'll be backing up to those flash drives) and I'd really like to keep them that way (because sometimes I do plug them into Windows machines, and I know FAT32 works with both Windows and Linux). So is there any reason I would have to use ext4 rather than FAT32 when backing up stuff with LuckyBackup?
I confess to great ignorance about the difference between the ext and FAT formats. If I format a flash drive as ext4 and then plug it into a Windows computer, does it just not work? And what's the advantage of using ext4 if FAT32 works with both Linux and Windows? What are the disadvantages of ext4?
Thanks.
What I did in Windows was create images of my drive and restore them.
In Linux I am running
Code:
rsync -aAXv --exclude={"/home/*","/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /* /path/to/backup/folder
and this creates a folder for me with all my files, and apparently saves metadata like permissions and paths...
Since I'm using Arch, things break sometimes, and when I'm booted into a CLI with errors I can't figure my way out, since I'm a noob... would I be able to just delete my entire root and replace it with the rsync backup without a problem?
Hey guys, I understand how dd works. I've been researching it and I see that
if= is the source of what I want to copy
of= is the target that I want to send the backup to
So, what I am trying to do is copy my hard drive on the saturn server (/dev/sda1) over SSH to my backup server, skyline.
so [saturn harddrive /dev/sda1] -----backup---> [skyline:/backup/directory]
this is the command I am using
sudo ssh willc86@skyline "dd if=/dev/sda1 " | dd of=/backup/saturn.img
However, it is saying "failed to open /dev/sda1: permission denied", and I am running as root. Any idea why?
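One likely explanation: the quoted command after ssh runs on the remote machine as willc86, so sudo only elevates your local ssh process; the remote dd never gets root, hence the permission error. Note also that `ssh willc86@skyline "dd if=/dev/sda1"` reads *skyline's* /dev/sda1, not saturn's. A hedged sketch of one fix (assuming you can run the command on saturn and willc86 can write to /backup on skyline):

```shell
# Run ON saturn, pushing the image to skyline:
#   sudo dd if=/dev/sda1 | ssh willc86@skyline "dd of=/backup/saturn.img"
# (Avoid ssh -t in a pipe like this: a pseudo-terminal can corrupt the
# binary stream.)

# Local stand-in for the same pipe, using files instead of a disk/ssh:
DISK=$(mktemp)   # stands in for /dev/sda1
IMG=$(mktemp)    # stands in for /backup/saturn.img
echo "partition data" > "$DISK"

# First dd writes the source to stdout; second dd writes stdin to the image.
dd if="$DISK" | dd of="$IMG"
```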
thanks a bunch guys!
I'm running Xubuntu and it was a challenge just getting Copy.com on there. (I installed the desktop app on both of my computers.) Now that I have it though, I don't really know how to use it.
I know this is kind of more of a Copy.com question, but I don't know anything about Copy.com (besides having it--lol) and besides, I like you LQ guys.
So yeah, I installed the desktop app for Copy.com on both of my computers. I know that if I put something in the Copy folder that will be available to both computers.
But how Copy does the backing up I don't know.
When I change a file or folder do I have to plop that into the Copy folder every time or does Copy somehow update the file or folder in the Copy folder automatically? (It doesn't seem to.)
Okay, when I, say, take the Documents folder from one computer and plop it into the Copy folder that's that. Then I take the Documents folder from the other computer and plop that into the Copy folder, then all the files from both folders will be in the Copy folder (and the Copy cloud), right?
Now I just removed a couple of files from a folder and copied and pasted the folder into the Copy folder. But then when I looked at the Copy folder the files I'd deleted were still there. What's the process? How does it work?
I mean, how does this work as a way of backing things up AND organizing things? To me it seems like a decent way of throwing stuff into the Copy folder (and cloud), but how is that different than Google Drive? I mean, that's not really a backup, is it? It's like a flash drive in the cloud.
And when I combined the same folders (same names, anyway, but each with different files inside) from the two computers, I'd expected each folder on each computer to end up with all the files from both. Instead, each stayed as it was, and the combined set is only in the Copy folder.
I like the notion of just throwing the folders and files into the Copy folder. It's much quicker than Google Drive. But the backing up feature eludes me and the syncing feature makes me fearful that I'll lose data or that the files will become hopelessly less organized.
Thanks.
Hello everyone,
I recently had an issue where I lost my whole backup server: an electrical overload caused the server to literally explode and fried all four of my terabyte drives.... needless to say, I have no more backups because of this. Everywhere I had read about backups said that setting up a RAID array would allow me to keep good backups.... boy did I learn the hard way that I also need some sort of external backup option, which brings me to this post and my questions:
I'm using Ubuntu 14.04 LTS Server on an older Dell PowerEdge 600SC, and I was thinking of using WD Passport 1 TB external drives as my "offsite" backup option. I don't have a lot of data, and my current backup schedule is only weekly, so I'm thinking that with two of these Passport drives I can keep one offsite and one attached to the server, and rotate them every 4 weeks so as not to lose all my data.
Here's my question: Ideally, I would love to just be able to unplug the current drive, plug in the new drive and have everything work. However, I don't see this actually working, but if there's a way to do this, that would be totally awesome.... ;-)
So, realistically, I know I will have to unmount the one drive, unplug it, then plug in the new drive and mount it on the system. Is there a way to have it mount at the same mount point automatically, so that I don't have to rewrite my backup script each time I swap drives out? Or will the UUIDs get messed up each time I do this?
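One approach that should survive drive swaps: UUIDs are unique per filesystem and will differ between the two drives, but filesystem *labels* can deliberately be made identical, so an fstab entry keyed on the label mounts whichever drive happens to be plugged in at the same mount point. A sketch, where the device names and the label are assumptions (check yours with lsblk/blkid first):

```shell
# One-time setup: give BOTH Passport drives the same label
# (hypothetical device names -- confirm with lsblk first):
sudo e2label /dev/sdb1 wdbackup    # drive A
sudo e2label /dev/sdc1 wdbackup    # drive B, while it is the one attached

# /etc/fstab entry -- "nofail" lets the system boot with neither drive
# present; the mount point never changes, so the backup script doesn't either:
# LABEL=wdbackup  /mnt/backup  ext4  defaults,nofail  0  2
```

After that, swapping should just be: unmount, unplug, plug in the other drive, mount /mnt/backup.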
Hopefully this makes sense and an easy solution can be found to accommodate this idea.....
Thanks again for all your help. This site is awesome for newbies such as myself........
Mikey
Hi all,
I am trying to write a script that syncs files from source to destination. I have one centralized server that can ssh to any server without a password. When I run the script, it sshes to the source server perfectly fine, but then a password is required for the destination server. I was wondering how I can clean this up before I start using case statements.
Below is a sample I wrote
#!/bin/bash
# This script syncs files between servers
echo "Type in ID: "
read ID
echo "Type in Source Server: " #source server
read S
echo "Type in Destination Server: "
read DS
if [ "$S" == 9 ]; then
    ssh -t "root@${S}webserver1" "rsync -av /home/rlui/$ID root@${DS}webserver2:/home/rlui/"
    ssh -t "root@${S}webserver1" "rsync -av /home/rlui/tmp/$ID root@sl${DS}webserver2:/home/rlui/tmp/"
fi
where S and DS are cluster numbers
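On the password prompt: rsync in this script runs on the source server and connects from there to the destination, so it is the *source* server's root account that needs passwordless access to the destination; the central server's keys never come into play for that second hop. Two common fixes, sketched with the hypothetical hostnames the script would build (e.g. S=9, DS=10):

```shell
# 1) One-time: install the source server's key on the destination
#    (run from the central server):
#      ssh root@9webserver1 "ssh-copy-id root@10webserver2"
#
# 2) Per-run: forward your ssh agent (-A) so the hop from webserver1 to
#    webserver2 can reuse the central server's key:
#      ssh -A -t root@9webserver1 \
#          "rsync -av /home/rlui/SOMEFILE root@10webserver2:/home/rlui/"
```

Option 1 is the usual choice for unattended scripts; agent forwarding only works while you are logged in with your agent running.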
I apologize in advance if I am not clear on anything
I'm just getting into Bash scripting, and would appreciate some help with this question. My music collection is split into a smaller, "active" set, kept on my laptop, and a much larger collection on an external hard drive. I've just converted some of the larger filetypes on my "active" set to *.mp3, and now want to move all the original files (*.flac) to the external hard drive. I need some help putting together a command or script that will recursively search my active music set for *.flac and then move them, but keeping the source directory structure. Some or all of these subdirectories may not exist on the destination.
eg. On the active music set, I may have:
/Music/artist1/album1/(a mix of *.mp3 and *.flac files)
/Music/artist2/album1/(a mix of *.mp3 and *.flac files)
and on the hard drive
/Music 2/artist1/album2/(the contents of the album)
So when copying, it'll need to create "/album1/" in "artist1" on the destination, and also "/artist2/album1/"
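One way to sketch this (the paths here are temp-directory stand-ins for "/Music" and "/Music 2"): find every .flac under the source, recreate its directory on the destination with mkdir -p, then move the file.

```shell
# Stand-in directories for the real music locations.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
mkdir -p "$SRC/artist1/album1" "$SRC/artist2/album1"
touch "$SRC/artist1/album1/song.flac" "$SRC/artist1/album1/song.mp3"
touch "$SRC/artist2/album1/tune.flac"

# Find every .flac (null-delimited, so odd filenames are safe),
# recreate its directory under $DEST, then move it across.
cd "$SRC"
find . -name '*.flac' -print0 | while IFS= read -r -d '' f; do
    mkdir -p "$DEST/$(dirname "$f")"
    mv "$f" "$DEST/$f"
done
```

rsync can do much the same in one line, e.g. `rsync -av --include='*/' --include='*.flac' --exclude='*' --remove-source-files "$SRC/" "$DEST/"`; like the loop above, it removes only the files, leaving the (possibly empty) source directories behind.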
Thanks in advance!
Hi,
I have a /backup directory which contains 5 days worth of backups:
2015-03-12_03-01-07
2015-03-11_03-01-07
2015-03-10_03-01-07
2015-03-09_03-01-07
2015-03-13_03-01-07
I need to copy the content of the NEWEST backup. Ideally, I'd like to find newest backup and assign it to a variable for later use.
I know I can
Code:
ls -lt
to sort directories, but how do I pick the first one and store it into a variable?
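Since the directory names embed the date, you can sort either by modification time (ls -t) or simply by name. A sketch with stand-in directories (in real use you'd cd /backup first):

```shell
cd "$(mktemp -d)"                  # stand-in for /backup
mkdir 2015-03-11_03-01-07 2015-03-12_03-01-07 2015-03-13_03-01-07
touch -d 2015-03-11 2015-03-11_03-01-07   # give each dir a matching mtime
touch -d 2015-03-12 2015-03-12_03-01-07
touch -d 2015-03-13 2015-03-13_03-01-07

# -t sorts newest first; -d with a glob lists the directories themselves
# instead of their contents. head -n1 keeps the first (newest) entry,
# and ${newest%/} strips the trailing slash the glob leaves behind.
newest=$(ls -td -- */ | head -n1)
newest=${newest%/}
echo "$newest"
```

Because these names sort chronologically anyway, `newest=$(ls -d -- */ | sort -r | head -n1)` works too and doesn't depend on mtimes. Then you can use it later, e.g. `cp -a "/backup/$newest/." /some/restore/target/`.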
Thanks,
Please forgive me, but I'm a little new to Red Hat (RHEL 5). I'm using rsync to back up critical data to a second disk; here is what I'm typing at the command line: rsync -rvgal /data/disk1/share /data/backup/share. It appears that the soft links are not transferred to the backup drive, and some of the links point to data not located in the source folder (/data/share). After reading the rsync man page I was a little confused about the L option (vs. the l option). In order to ensure that the linked files are moved, should I type the below:
rsync -rvgaL /data/disk1/share /data/backup/share
A million thanks,
Johnny Mac
I am using rsync to back up files to another machine. The users on my fileserver do not exist on the backup server, so rsync throws errors about the permissions. It copies the files fine, but I want to get rid of the errors and have rsync ignore the permissions when backing up.
/backup is a mounted ftp directory
Below is the current command and output:
Code:
root@Fileserver:~# rsync -av --delete /shared/fileshare/ /backup/backup
building file list ... done
created directory /backup/backup
./
manager/
manager/chironfs.txt
manager/cronman.txt
manager/curlftpfs.txt
manager/curlman.txt
manager/getnetaddress.txt
manager/grepman.txt
manager/rsyncman.txt
manager/tarman.txt
public/
user1/
user10/
user2/
user3/
user4/
user5/
user6/
user7/
user8/
user9/
rsync: chown "/backup/backup/manager/.chironfs.txt.c6MbJ7" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.cronman.txt.hdBG4P" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.curlftpfs.txt.t1sG4L" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.curlman.txt.6oWPoW" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.getnetaddress.txt.V8z8Kk" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.grepman.txt.REh4WW" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.rsyncman.txt.ho8VNM" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.tarman.txt.BkcmeS" failed: Operation not permitted (1)
sent 211115 bytes received 274 bytes 6710.76 bytes/sec
total size is 210263 speedup is 0.99
rsync error: some files could not be transferred (code 23) at main.c(977) [sender=2.6.9]
root@Fileserver:~#
I tried adding the "no" prefix to -p, but it still didn't work; see below:
Code:
root@Fileserver:~# rsync -av --no-p --delete /shared/fileshare/ /backup/backup
building file list ... done
./
manager/
manager/chironfs.txt
manager/cronman.txt
manager/curlftpfs.txt
manager/curlman.txt
manager/getnetaddress.txt
manager/grepman.txt
manager/rsyncman.txt
manager/tarman.txt
public/
user1/
user10/
user2/
user3/
user4/
user5/
user6/
user7/
user8/
user9/
rsync: chown "/backup/backup/manager/.chironfs.txt.6Q3eP2" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.cronman.txt.FC8Orx" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.curlftpfs.txt.mlVSN9" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.curlman.txt.vlJ4b1" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.getnetaddress.txt.LXmft0" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.grepman.txt.SVuaye" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.rsyncman.txt.KTNYqA" failed: Operation not permitted (1)
rsync: chown "/backup/backup/manager/.tarman.txt.zcU90c" failed: Operation not permitted (1)
sent 211115 bytes received 274 bytes 7686.87 bytes/sec
total size is 210263 speedup is 0.99
rsync error: some files could not be transferred (code 23) at main.c(977) [sender=2.6.9]