Zardoz89 opened this issue 8 years ago
Compression of 200GB (plus metadata) can take a long time, I guess this is normal. Your config looks ok too. To speed things up, you might give the stream_compression branch a try, which enables more compression algorithms (pigz, pbzip2 and lzo), e.g.:
raw_target_compress pigz
raw_target_compress_threads 8
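For context, a minimal sketch of where these two options would sit in a btrbk config; the volume, subvolume and target paths below are made-up examples, not taken from the attached config:
# hypothetical layout: btrfs pool mounted at /mnt/btr_pool, raw target on the CIFS mount
volume /mnt/btr_pool
  subvolume data
    target raw /mnt/backup-cifs/btrbk
      raw_target_compress          pigz
      raw_target_compress_threads  8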
You might also want to run btrbk --progress to get an idea of how long it will take to complete (unfortunately there is no way to see the size of the complete send-stream, which can get considerably bigger than 200GB depending on your data structure, e.g. if you have many small files).
If you run btrbk -l debug, I'll be able to help if you run into trouble.
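For example, both options can be combined in a dry run or a real run (assuming the default config location):
# show transfer progress (needs the 'pv' tool) and debug-level output
btrbk --progress -l debug dryrun
btrbk --progress -l debug run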
(I'm on vacation, be prepared for delayed answers...)
In the end, I used gzip and I run it each Sunday (so I don't need to worry much about the time it takes). I did a test of importing a gzip image of a smaller volume on a virtual machine. 100% success :+1:
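For reference, restoring such a raw gzip backup boils down to piping the decompressed image back into btrfs receive; a minimal sketch, with a made-up file name and target path:
# replay the compressed send-stream into a subvolume on the restore machine
gunzip -c /mnt/backup-cifs/btrbk/data.20160101.btrfs.gz | btrfs receive /mnt/btr_pool/restore/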
I only need to set everything up so that I keep just the last backup file. We do a dump of this directory to backup tape (on an incremental basis), so I don't need to keep backup files older than a week.
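Keeping only the most recent backups can be expressed with btrbk's retention options; a sketch, assuming a recent btrbk version with the *_preserve settings (the values are examples only):
# keep snapshots for two days, raw target files for one week
snapshot_preserve_min   2d
snapshot_preserve       2d
target_preserve_min     latest
target_preserve         7d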
Now, if there were a way of getting to an old snapshot without doing an umount/mount cycle by hand and nearly stopping the machine, that would be great.
I don't understand this question. Why do you need to umount/mount to access snapshots in the first place?
If I did my research correctly, to roll back to a previous snapshot I need to go through the umount / replace-the-subvolume / mount cycle by hand. These 3 steps could be quickly executed by a script, but if I'm restoring, for example, the whole /var directory, I need to stop nearly all the services while doing it.
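A sketch of what such a rollback script might look like; the pool mount point, snapshot name and service names are made-up examples, not taken from this thread:
systemctl stop jenkins gitlab-runsvdir      # example names: whatever holds files open under /var
umount /var
mv /mnt/btr_pool/var /mnt/btr_pool/var.broken
btrfs subvolume snapshot /mnt/btr_pool/btrbk_snapshots/var.20160101 /mnt/btr_pool/var
mount /var                                  # re-mount via the existing fstab entry (subvol=var)
systemctl start jenkins gitlab-runsvdir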
If you're replacing /var, it's obvious that you have to restart all the services which access files there.
Instead of mounting a subvolume to /var, you could also have /var be a (nested) subvolume; then you could simply move it away and replace it with a backup without remounting. But the problem remains: in order to make sure that your running services close their already-opened files and use the new folder (= subvolume in this case), you'll have to restart them.
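A sketch of that nested-subvolume variant, with the same caveat that the paths and service name are assumptions:
# /var is a nested subvolume of /, so no umount/mount is needed
systemctl stop jenkins                      # example: stop services using /var
mv /var /var.old
btrfs subvolume snapshot /mnt/btr_pool/btrbk_snapshots/var.20160101 /var
systemctl start jenkins
btrfs subvolume delete /var.old             # once everything looks good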
I'm trying btrbk on a server where we have some developer tools (Nexus, Jenkins, GitLab, FishEye) and where we need a more robust backup system. Sadly, our backup "server" is a set of shared folders on a Windows server via CIFS with DFS, which we mount with mount.cifs, so we need to use the target raw option.
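For illustration, the kind of mount that sits underneath such a raw target; the server, share and credentials paths are hypothetical:
# mount the Windows/DFS share that will hold the raw backup files
mount.cifs //winserver/backups /mnt/backup-cifs -o credentials=/etc/backup-cifs.cred,vers=3.0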
I first tried with a small subvolume to see if it all works. No problem. However, when I try it with the big volume (200GiB), btrbk looks like it freezes while running gzip compression over the output file. I waited a bit more than an hour. Is it normal that it takes so long? I ended up aborting it.
Config file: btrk.conf.txt
Dry-run output: