junrue opened this issue 6 years ago
Are CRASHPLAN_SRV_MAX_MEM and "java mx [MaxMem], restart" actually independent values?
CRASHPLAN_SRV_MAX_MEM and the java mx [MaxMem], restart command control the same thing.
Since my backup is 6TB, Code42 would tell me I need to give Java 6GB of RAM. Since my Syn has 6GB, that leaves nothing for the CrashPlan container (and any other service). Am I differentiating these correctly?
To find the correct amount of memory to set, I would start with 4GB and see if CrashPlan crashes (you get a pop-up window in this case). If it does, increase it to 4.5GB and check for crashes again. Keep increasing until it stops crashing. That way you will find the minimum amount of memory required, and it will probably be less than 6GB.
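For reference, here is a minimal sketch of passing that value when creating the container (the container name, port and exact value are examples, not necessarily what you used):

```sh
# Sketch: create the container with a 4GB Java heap via CRASHPLAN_SRV_MAX_MEM.
# This controls the same limit as the "java mx [MaxMem], restart" console command.
docker run -d --name=crashplan-pro \
  -e CRASHPLAN_SRV_MAX_MEM=4G \
  -v /volume1/docker/AppData/CrashPlan:/config:rw \
  -p 5800:5800 \
  jlesage/crashplan-pro
```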
1b. I understand I have to remove and re-install the container to change SRV_MAX_MEM. I'm new to Linux and containers and don't understand how I would save my container's setup and config so that re-installing would go more quickly (or perhaps this isn't possible).
Everything CrashPlan writes (config, cache, etc.) is done under /config in the container (/volume1/docker/AppData/CrashPlan on the host). So as long as you create the container with the same mapping for /config, you can delete and re-create the container without problem.
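A minimal sketch of that delete/re-create cycle (assuming the container is named crashplan-pro and uses the mapping above; the other options shown are just examples):

```sh
# Remove the old container; the CrashPlan identity, settings and cache survive
# on the host side under /volume1/docker/AppData/CrashPlan.
docker stop crashplan-pro
docker rm crashplan-pro

# Re-create it with the SAME /config mapping so it picks up the existing setup.
docker run -d --name=crashplan-pro \
  -v /volume1/docker/AppData/CrashPlan:/config:rw \
  -p 5800:5800 \
  jlesage/crashplan-pro
```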
I have 30Mbit upload bandwidth. I watch the DSM bandwidth counter and it rarely rises above 150KB/s. I've set upload bandwidth to unlimited in the settings. Any other tips to improve this?
Because of deduplication, the upload rate is known to be slow. However, this is not a good indication of the speed at which data is being backed up. For example, uploading 1MB could be all that's required to back up 10MB of data. So you should watch the rate at which the backup progresses. You can look at the history (Tool->History) to get an indication of the "effective" upload rate.
Very helpful, thank you. I'll set Java memory down and slowly increase if I get a failure pop-up. So far, here's the process I've followed in transitioning from Windows --> Synology.
As I go through the History log I can see where I updated my selections and dedup caused the nominal upload to be very good. There are some recent points, though, where I can see nominal upload at <=500Kbps. I'm pretty sure I'm past (or very close to) the end of dedup being effective and that I'm uploading new content. Is this History log stored somewhere so that I could dump it into a spreadsheet and read it more easily? I couldn't figure out how to copy/paste its contents within the container UI. Does CP throttle upload speeds? Even on Windows I don't think I ever got above 7-10Mbps upload.
Minor thing: I noticed that the History timestamps are in UTC. I'm in Pacific, which means 3AM UTC (maintenance time for the CP container) hits at 10PM Pacific, during a time I'm likely using the unit. I've set DSM to Pacific time, but I assume the container doesn't know this. Any tips?
Regarding upload speeds, you might want to check the de-duplication settings on crashplanpro.com. I have yet to figure out where (or if) those settings are manageable per device, but you can set them to minimal and you're likely to get higher upload speeds. However, that may not be an optimal solution.
In crashplanpro.com, go to "Settings" > "Device Backup" > "Backup" tab > "Advanced settings" section > "Data de-duplication" > set to "Minimal".
Further tinkering was possible with the past Home version of CrashPlan (link below describes this), but I don't know if it's possible with the new SMB Code42 apps: https://crashplanbackup.com/crashplan-performance-tips/
Is this History log stored somewhere so that I could dump it into a spreadsheet and read it more easily? I couldn't figure out how to copy/paste its contents within the container UI.
Yes, you can find the history at /volume1/docker/AppData/CrashPlan/log/history.log.0.
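So, from an SSH session on the Synology host, something like this should pull the per-session rates out of that file for spreadsheet use (the destination path is just an example; the grep pattern matches the wording seen in the history entries):

```sh
# Copy the whole history log somewhere convenient for inspection...
cp /volume1/docker/AppData/CrashPlan/log/history.log.0 /volume1/docker/history.txt

# ...or keep only the completed-backup lines that report an effective rate.
grep "Effective rate" /volume1/docker/AppData/CrashPlan/log/history.log.0 \
  > /volume1/docker/effective_rates.txt
```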
I've set DSM to Pacific time, but I assume the container doesn't know this. Any tips?
Set the timezone via the TZ variable. See https://github.com/jlesage/docker-crashplan-pro#environment-variables for more details.
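For example (container name assumed), the variable has to be set when the container is created, and the result can then be verified from the host:

```sh
# TZ must be passed at container creation time, e.g. by adding this to docker run:
#   -e TZ=America/Los_Angeles
# Afterwards, check that the container now reports local time:
docker exec crashplan-pro date
```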
It might be nice if the container documentation called out that the MAX_MEM variable and the Java mx setting are the same thing. I guess this fact implies that changing the Java value via CP's container console (Ctrl-Shift-C) is a dynamic change, rather than having to destroy and reinitialize the container to change MAX_MEM.
@acseven, I adjusted the dedup setting at cp.com and pushed the value to my device. I reinitialized the container for good measure. Upload speed is still hanging below 180KB/sec. I'm a little worried about further reducing dedup's function. Most of the files I'm backing up are fairly large (multi-gigabyte), already compressed, and unchanging. On the other hand, there are 1.5TB of FLAC audio and many thousands of JPGs (much smaller files). I'll leave the settings as is and check the logs to see what my nominal throughput looks like.
@jlesage, I got the time zone fixed - thank you. I also reduced memory consumption and will move up slowly if the container crashes.
In my history log, except for outliers where the backup process doesn't run long (because I'm changing settings), I find that the effective rate for uploading is around 1Mbps.
What do you think about messing with the niceness variable? This container isn't fighting for resources on the Syn, but I wondered if a negative niceness might affect upload rate. A snip of my history file is attached.
It might be nice if the container documentation called out that the MAX_MEM variable and the Java mx setting are the same thing. I guess this fact implies that changing the Java value via CP's container console (Ctrl-Shift-C) is a dynamic change, rather than having to destroy and reinitialize the container to change MAX_MEM.
Thanks for the suggestion, I will update the documentation. But note that setting the variable is safer, since you are sure not to lose the setting (after an upgrade, config change, etc.).
In my history log, except for outliers where the backup process doesn't run long (because I'm changing settings), I find that the effective rate for uploading is around 1Mbps.
The effective rate seems to be (wrongly) calculated on the uploaded data, not on the amount of backed-up data. For example:
I 04/22/18 07:59PM [7efbc8f5e345 Backup Set] Stopped backup to CrashPlan Central in 0h:24m:30s: 27 files (2.60GB) backed up, 192.10MB encrypted and sent (Effective rate: 1.1Mbps)
(192.10 * 8) / ((24 * 60) + 30) = 1.045 Mbps
If we use the amount of data that has been backed up: (2.60 * 1024 * 8) / ((24 * 60) + 30) = 14.489 Mbps
So I would say that the effective rate in this case is 14.489 Mbps, not 1.045 Mbps.
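A quick way to redo that arithmetic from a shell, using the numbers from the log line above:

```sh
# Backed-up data (2.60GB) converted to megabits, divided by the session
# length (0h:24m:30s = 1470 seconds).
awk 'BEGIN { printf "%.3f Mbps\n", (2.60 * 1024 * 8) / (24 * 60 + 30) }'
# -> 14.489 Mbps
```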
What do you think about messing with the niceness variable? This container isn't fighting for resources on the Syn, but I wondered if a negative niceness might affect upload rate. A snip of my history file is attached.
I don't think the niceness will change anything, since CP is not CPU-limited.
Since changing the time zone yesterday and recreating the container, I've watched the effective rate in the history. It's almost always 1.1Mbps but, as you note, I agree that it's being calculated incorrectly. I've attached (cp.txt) just the few events since the time zone change. It looks like dedup is still being somewhat efficient.
However, I'm not sure I understand the low bandwidth usage. If dedup causes a file not to be sent, then shouldn't the files that do need to be sent go rather quickly? The time estimate for completing was 31 days all day yesterday. Today (after 3AM maintenance), it's now 60+ days and no files have changed on the Synology. Maybe that estimate is broken, too. Maybe dedup takes just as much time to calculate CRC and verify files as it does to send the file (I'm kidding). 😄
Oh, one other thing that I thought of for helping you provide support. The contents of History are accessible via crashplanpro.com --> Devices --> Active --> [click the name of your device] --> History tab
This makes it easier than trying to copy/paste from the docker container or SSH.
Is there any way to see in real time what file is currently being uploaded?
Oh, one other thing that I thought of for helping you provide support. The contents of History are accessible via crashplanpro.com --> Devices --> Active --> [click the name of your device] --> History tab
Thanks for the tip!
Is there any way to see in real time what file is currently being uploaded?
I think there is no such thing ... :(
The Windows client used to display the current file in a "details" pull-down on the status screen. I miss that feature. It helped highlight large files (e.g. vhdx) that I didn't expect and didn't want in the backup.
I am not sure, but I think the history log, as well as "backup_files.log.n" in /config/log, only receive an entry when a file task is completed or failed, not when it starts. Your question would be a good one to pose to Code42.
I wouldn't hold my breath, though; they seem more interested in deprecating features (e.g. peer-to-peer) than in developing new ones. With that said, it's still a really good backup service. Although if they deprecate versioning, I'll put them into our past.
So I completed 24 hours of backup without interrupting anything: 12GB sent, 22GB backed up in 23:59:49. Real rate: 1.1Mbps, effective rate: 2.04Mbps.
Seems abysmal, doesn't it? I have about 1TB left to complete. Maybe I should take it to my friend's house, who has 1Gbps upload, and leave it there for a few days to check the logs?
Last question for now and then you can close this issue, @jlesage. What is an average upload rate that people are getting to CrashPlan? Does anybody consistently see >=10Mbps? Just wondering how far from average my 1.1Mbps is.
I would say that I'm not surprised by such an average. A lot of people find that the upload is slow.
The last thing you could do is contact CrashPlan support team and ask about this. Maybe they could provide better explanation and/or solution.
Ugh. When I upload straight into Azure blob storage or AWS Glacier I can saturate my whole upload bandwidth. But it's way more expensive than CP. I'll contact them and see if they'll help. Thanks for the guidance, everyone. You can resolve this issue if you'd like.
Found this thread while facing the same issue as the OP. After increasing the RAM in my machine and setting the environment variable to allow an appropriate amount of RAM, it was still running at around 1Mbps upload. I finally discovered that the web interface on the CrashPlan site has a network bandwidth setting, "Bandwidth limit when user is present", that was set to 1Mbps. When I set this to "none", my upload immediately went up to 5MB/s in the Synology resource manager.
Apparently CrashPlan running under Docker was being treated as if the user were always present, which activated this throttle. I had only run CrashPlan on a Linux box prior to using Docker, so I hadn't run into this before. Just posting here in case someone else is pulling their hair out trying to sort this out.
Just wanted to bring attention to @djkarn105's comment above, which ended up being the solution to my slow upload speeds. My only guess is that these settings were automatically applied to accounts that migrated from personal to business pro -- whenever that happened -- because I never remember setting these throttling limits. Once I made this change, my backup, which was estimated to take about 72 days, recalculated to just over 15 days, and my NIC throughput now hovers at about ~500Mbps where it was normally ~100Mbps.
I've gotten my container setup and dedup is helping me quickly catch up to my previous Windows CP installation. Prior to now, I've been uploading for about 16 months without ever 100% completing a backup (6TB).
Here's how I launched my container for the first time: root@NAS:~# docker run -d \
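Since the rest of that command didn't make it into the post, here is a sketch of what a complete invocation pulling together the options discussed in this thread could look like; the container name, port and the /storage mapping are assumptions, not necessarily what was actually used:

```sh
# Sketch only: run the container with the memory, timezone and volume mappings
# mentioned earlier in this thread. /storage is mounted read-only as the data
# CrashPlan is allowed to back up.
docker run -d --name=crashplan-pro \
  -e CRASHPLAN_SRV_MAX_MEM=4G \
  -e TZ=America/Los_Angeles \
  -v /volume1/docker/AppData/CrashPlan:/config:rw \
  -v /volume1:/storage:ro \
  -p 5800:5800 \
  jlesage/crashplan-pro
```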
I have increased the inotify limit as per Code42's instructions as well.
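For anyone else reading along, the usual knob is fs.inotify.max_user_watches on the host; the exact value below is only an example, so check Code42's current guidance:

```sh
# Raise the inotify watch limit on the Synology host so real-time file watching
# can cover a large backup selection. Add the same line to /etc/sysctl.conf if
# you want the setting to survive a reboot.
sysctl -w fs.inotify.max_user_watches=1048576
```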
Thanks for your time!