@tasket I get that too from direct wyng calls since maybe the 10th of May.
I will update my dom0 upgrade script to keep a trace of upgrades and track issues more properly.
I had not tried to use deduplication before, including from wyng itself. Also, am I correct in understanding that in order to deduplicate the archive as a whole I have to use sudo wyng arch-deduplicate --dest=qubes://path/to/laptop.backup?
Yes, and then add dedup to your backup/send calls if you prefer longer backup sessions in exchange for less consumed archive storage space; or, if archive storage is not an issue, just deduplicate the archive as a whole once in a while, depending on your use case.
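For example, a rough sketch of both approaches (paths are placeholders, and I'm assuming the send-time option is the --dedup flag):

```
# One-off: deduplicate the archive as a whole, as in the command above
sudo wyng arch-deduplicate --dest=qubes://path/to/laptop.backup

# Ongoing: dedup at send time instead (longer sessions, less archive space);
# assuming send accepts a --dedup flag, as discussed above
sudo wyng send --dedup --dest=qubes://path/to/laptop.backup
```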
You're choosing between resources here: higher local CPU usage with dedup on send, versus more remote archive space and bandwidth consumed.
It is similar to the sparse/sparse-write options on receive operations.
Basically, you're choosing between using your host CPU (electricity) versus SSD wear and bandwidth (which might be tied to real $$$).
With the sparse options it's the inverse: you choose more computation on both (or all three) ends for lower local space consumption, writing in place at the expense of considerably more calculation.
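For instance, a minimal sketch of the receive side (volume name and paths are placeholders; assuming the sparse-write option mentioned above):

```
# Normal receive: rewrites the whole local volume
sudo wyng receive --dest=qubes://path/to/laptop.backup myvolume

# Sparse-write receive: compares against existing local data and writes
# only the changed regions in place -- more computation, less SSD wear
sudo wyng receive --sparse-write --dest=qubes://path/to/laptop.backup myvolume
```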
@UndeadDevel @tlaurion Yes, this bug was recently introduced when Wyng started recording '0 bytes' sessions for unchanged volumes. It should be fixed now with the Wyng 08wip '20240514' update.
Fixed, and I can now send deduped backups over qubes-ssh, over a Tor hidden service, over a custom ssh port, onto softraid5 on an OpenWrt acm3200. Not fast, but it works! It seems tor + dropbear + ssh script (shell) + python would be happy to have more than two slow cores there, so as not to spin on IOWAIT.
@tasket thanks!