jlesage / docker-crashplan-pro

Docker container for CrashPlan PRO (aka CrashPlan for Small Business)
MIT License

Changes in mapping volumes with latest Docker #310

Open Skiptar opened 3 years ago

Skiptar commented 3 years ago

Hi all. This is more a comment/information than an issue, but I wanted to share my experience with the community. I hope that's OK...

When Docker on Synology was updated a while back I had a bunch of problems with my /storage mapping, so I made some changes. I found I was able to map my main volume to /storage from the command line with "-v /volume1:/storage:ro". The surprise was that this mapping now showed up in the Docker UI, which it hadn't before.

This week I updated to the latest release to get 8.6.1.3. I did this using the normal stop/clear/download/run approach but found that my volume map had disappeared, so CrashPlan didn't have anything to back up. I had to delete the container and then recreate it using the command line; it's working just fine again.
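
A recreate command of that shape looks something like the sketch below; only the -v /volume1:/storage:ro mapping comes from the description above, while the container name, port, and /config path are assumed defaults for this image:

docker run -d \
    --name=crashplan-pro \
    -p 5800:5800 \
    -v /volume1/docker/appdata/crashplan-pro:/config:rw \
    -v /volume1:/storage:ro \
    jlesage/crashplan-pro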

So not an issue with this image/project, but potentially an issue with Docker on Synology which could cause us problems in the future...

jlesage commented 3 years ago

Yeah, from the UI it was never possible to map /volume1 to a container, only the shares under it.

Recently, this unsupported configuration seems to cause more problems, like you are describing. From what I heard, even if re-creating the container fixes the issue, this fix is temporary and the problem will come back.

Mapping shares instead of /volume1 seems to be the right approach to avoid any issue.

Skiptar commented 3 years ago

Yes, that will be what I move to next. It just means I have to remember to add the folder to the backup each time I add a new one… Not a massive deal, but I did like the simplicity of adding the whole volume at once.

Thanks! M

PatGitMaster commented 3 years ago

Maybe I'm lucky, but I've had /volume1 mapped since the very beginning using these statements in my docker run command:

-v /volume1/docker/CrashPlan:/config:rw \
-v /volume1:/volume1:ro \

I've just been very careful to not include the docker folder in the backup selection. To me, it's very convenient to not have to rebuild the container when new shares are needed in the backup.

Skiptar commented 3 years ago

I used this for ages and it worked just fine. But with the last Docker update it seems to have become a bit flaky.

halteach commented 3 years ago

JLesage,

Since first using your Docker container, I have used export and import to restart the container. What would the command look like for "Mapping shares instead of /volume1 seems to be the right approach to avoid any issue"?

halteach commented 3 years ago

J. Lesage,

This is my current command. What would I have to change?

sudo docker run -d --restart always --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -e CRASHPLAN_SRV_MAX_MEM=3072M -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:rw jlesage/crashplan-pro

jlesage commented 3 years ago

@halteach, in your case, instead of -v /volume1/:/volume1:rw, you should use multiple parameters (one for each folder under /volume1 that you want to back up). Also, there is no need to make the mapping read/write; CrashPlan only needs read access. For example:

-v /volume1/Documents:/volume1/Documents:ro -v /volume1/Photos:/volume1/Photos:ro
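
Applied to the full command from the earlier comment, this gives something like the following sketch (Documents and Photos are placeholders for whatever shares actually exist under /volume1):

sudo docker run -d \
    --restart always \
    --name=crashplan-pro \
    -e USER_ID=0 \
    -e GROUP_ID=0 \
    -e CRASHPLAN_SRV_MAX_MEM=3072M \
    -p 5800:5800 \
    -v /volume1/docker/appdata/crashplan-pro:/config:rw \
    -v /volume1/Documents:/volume1/Documents:ro \
    -v /volume1/Photos:/volume1/Photos:ro \
    jlesage/crashplan-pro

Add one -v line per share, and CrashPlan will see each of them at its original /volume1 path inside the container.
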
syn3rgy01 commented 3 years ago

jlesage - please excuse me as I am quite a novice. After the recent upgrade, all the backed-up folders are showing as empty, as many others have stated in the forum. My technical level is quite limited.

I am trying to get it working again with the below config however, seem to be having some issues.

docker run -d \
    --name=CrashPlan-pro \
    -p 5800:5800 \
    -v /volume1/docker/appdata/Crashplan-Pro:/config:rw \
    -v /volume1/homes:/volume1/homes:ro \
    -v /volume1/Photos:/volume1/Photos:ro \
    -v /volume1/User Folders:/volume1/User Folders:ro \
    -e USER_ID=1049 \
    -e GROUP_ID=101 \
    -e SECURE_CONNECTION=1 \
    -e CRASHPLAN_SRV_MAX_MEM=2096M \
    jlesage/crashplan-pro

I get the error: docker: invalid reference format: repository name must be lowercase.

If you could kindly assist it would be greatly appreciated, as it will take me forever to upload over 2 TB again.

Thank you

halteach commented 3 years ago

J. Lesage,

This is my current command. What would I have to change?

sudo docker run -d --restart always --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -e CRASHPLAN_SRV_MAX_MEM=3072M -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:rw jlesage/crashplan-pro

Okay, so I used the SSH command line and ran the command above. The result was a scan that said it had to copy all of my files to the archive (85k files, 375 GB). Ugh! Once it started running, it only copied the files not already in the archive. It then spent about 11 hours going through each file one by one, checking whether the file existed in the archive. If the file was already there, nothing was uploaded and it went on to the next file. I have seen block compares before, but never a one-by-one check.

jlesage commented 3 years ago

@syn3rgy01, it's probably the spaces in -v /volume1/User Folders:/volume1/User Folders:ro that cause the issue. Try adding double quotes:

-v "/volume1/User Folders:/volume1/User Folders:ro"

syn3rgy01 commented 3 years ago

Thanks @jlesage

I think it was more a permissions issue.

The final command looked like:

docker run -d \
    --name=CrashPlan-pro \
    -p 5800:5800 \
    -v /volume1/docker/appdata/Crashplan-Pro:/config:rw \
    -v /volume1/homes:/storage/homes:ro \
    -v /volume1/Photos:/storage/Photos:ro \
    -v "/volume1/User Folders:/storage/User Folders:ro" \
    -e USER_ID=1049 \
    -e GROUP_ID=101 \
    -e SECURE_CONNECTION=1 \
    -e CRASHPLAN_SRV_MAX_MEM=2096M \
    jlesage/crashplan-pro

While trying to figure out the issue I created a couple of test shares. When I look in CrashPlan now to select the folders to back up, I see all the test shares, and even though I have removed them, I can still see them under /storage/. Do you know how I remove these, as they no longer exist? The ones highlighted below are what I want to remove.

[screenshot: CrashPlan backup selection showing the removed test shares still listed under /storage]

Just to add: by the looks of it, I think it is backing up everything again, as all existing backups were to /volume1/Photos etc. and now it all points to /storage/Photos.

Should I rather use the below? And would it prevent the re-upload of all the data?

-v /volume1/docker/appdata/Crashplan-Pro:/config:rw \
-v /volume1/homes:/volume1/homes:ro \
-v /volume1/Photos:/volume1/Photos:ro \
-v "/volume1/User Folders:/volume1/User Folders:ro" \

Thanks again.

jlesage commented 3 years ago

Yes, if you want to keep your previous paths, you should use the container path /volume1/homes instead of /storage/homes (for example).
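
Putting the two fixes together (double quotes around the path with spaces, and container paths that match the original /volume1 locations), the command from the earlier comment would become something like this sketch:

docker run -d \
    --name=CrashPlan-pro \
    -p 5800:5800 \
    -v /volume1/docker/appdata/Crashplan-Pro:/config:rw \
    -v /volume1/homes:/volume1/homes:ro \
    -v /volume1/Photos:/volume1/Photos:ro \
    -v "/volume1/User Folders:/volume1/User Folders:ro" \
    -e USER_ID=1049 \
    -e GROUP_ID=101 \
    -e SECURE_CONNECTION=1 \
    -e CRASHPLAN_SRV_MAX_MEM=2096M \
    jlesage/crashplan-pro

Because the paths inside the container are unchanged, CrashPlan should recognize the existing backup selection instead of treating everything as new.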

For the removed folders that are still seen, where do you see them? When you click the Manage files... button?

meulie commented 3 years ago

Ah, this explains why my backup stopped working!

Thanks for all the useful info above. I'm now working on getting my backup back in working order. I'm running into one issue though: I'm unable to add the /volume1/homes folder to the volume mappings using the GUI. It doesn't show there.

jlesage commented 3 years ago

Does it show things under /volume1/homes?

meulie commented 3 years ago

[screenshot: Synology Docker GUI folder-selection dialog]

The homes-folder doesn't show here. It is present on volume1, and does contain sub-folders.

cmccallu commented 3 years ago

Mapping shares instead of /volume1 seems to be the right approach to avoid any issue.

I manually re-created my container the same as described above, e.g. -v /volume1:/storage:ro, and while I can see all my files under each backup set via Manage Files, it now appears to want to upload my 7 TB again.

How could this work if I mapped the shares under /volume1?

Thanks Chris

syn3rgy01 commented 3 years ago

I don't think it's actually uploading, just checking each file. Mine did the same: it was saying something like 1.8 TB to go and cycling through the files, but I think the line below that said something like "uploaded 0 MB".

Can you paste a screenshot?

SJLBoulder commented 3 years ago

Chris - I confirm what syn3rgy01 stated - CrashPlan is checking each file. Mine has been doing that for 8 days now, and is 71% complete. I've been checking my data usage with my ISP, and I'm convinced that there is no significant amount of data being uploaded. So, I'm sure it's just cycling through all the files looking for something that's not already backed up. [screenshots: CrashPlan client progress and CrashPlan report]

cmccallu commented 3 years ago

Thanks syn3rgy01 & SJLBoulder, really appreciate the response guys! Please find attached a screenshot of one of my backup sets. This is a valid file that I created before I re-created my container.

[screenshot: backup set file listing in CrashPlan]

cmccallu commented 3 years ago

My backup seems to be sorting itself out now, after a couple of days! As mentioned above, it's cycling through all my files but not really uploading any new data.

Just wondering if we will face the same issue with the volume mapping disappearing on the next update?

Thanks All Chris

meulie commented 3 years ago

Who can tell me how I can get my /volume1/homes folder back into the backup?

jlesage commented 3 years ago

@cmccallu, when you re-created the container, did you use the same volume mapping? If the mapping changed (i.e. the path to your data inside the container changed), CrashPlan will have to back up everything again. But because of deduplication, nothing will really be uploaded. One way to see if this is the case is to click the "Restore files" button and check if your files are duplicated under different paths. You can also use Tools->History to see what is going on.

jlesage commented 3 years ago

@meulie, I guess that /volume1/homes contains the home folders of all users. Is there any setting in Synology that you need to configure to allow Docker to access them?

meulie commented 3 years ago

@meulie, I guess that /volume1/homes contains the home folders of all users. Is there any setting in Synology that you need to configure to allow Docker to access them?

Docker runs as root on Synology, so it should have full access to /volume1/homes and anything underneath, AFAIK.

jlesage commented 3 years ago

Yes, but Synology's software may add extra restrictions (e.g. /volume1 can't be mapped into the container). Does Synology provide any kind of support? They should be able to explain why /volume1/homes is not available.

meulie commented 3 years ago

I've found an explanation/quick fix. I was signed in to the GUI as a user with full admin privileges, but not as the actual admin/root user; I had disabled that account, as per Synology's recommendations. When I enabled that account and signed in with it, I saw the homes folder in the list. I've now added it to the container config, and hope/assume it stays there :)

cmccallu commented 3 years ago

For me, using CrashPlan was no longer viable with these issues on my Synology. I have reduced my backup volume and moved to Backblaze B2 with Hyper Backup and S3. The performance is night and day, and it's actually cheaper monthly for my reduced backup size!

Skiptar commented 3 years ago

Curious as to the size of your backup. I just considered Hyper Backup on C2 or S3 and the storage price was about 5x what I’m paying for Crashplan. I’m backing up about 5 TB.


madfusker commented 3 years ago

I have 4 TB, and that was my experience with pricing as well. Also, it would cost $$ to get the data back if I ever lost it and my secondary failed.

I have been running CP on a QNAP 872XT in Docker for several years now without any issues. It's well worth $10 for unlimited storage. Fingers crossed.

cmccallu commented 3 years ago

Curious as to the size of your backup. I just considered Hyper Backup on C2 or S3 and the storage price was about 5x what I’m paying for Crashplan. I’m backing up about 5 TB.

I was backing up about 7 TB and have reduced it down to around 1 TB of critical data. I still back up everything locally. Given the upload speed, I didn't really trust that I would get the data back at a decent speed either!

Avpman2 commented 3 years ago

I'm curious: has anyone set up a second NAS at a friend's or relative's house, or at your office, and duplicated your NAS to it at night? You wouldn't necessarily need RAID redundancy at the remote location, just space. If you ever lost more data than you could easily download from the remote NAS, you could just go over and pick it up. I'm looking into that scenario. I'd load up the remote NAS locally to save time on the initial duplication, then take it over to my buddy's house.

madfusker commented 3 years ago

I don't use a secondary NAS for backups because of the expense. I use a Pi device connected to an external USB drive, with rsnapshot as my backup tool. It does versioning and everything. So the primary backup is CrashPlan, and the secondary is the remote Pi with rsnapshot.
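
For anyone curious what such a setup can look like, here is a minimal sketch of an rsnapshot configuration and cron schedule on the Pi; the paths, host name, and retention counts are assumptions, not the actual setup described above:

# /etc/rsnapshot.conf (fields must be separated by tabs; shown here as plain whitespace)
config_version    1.2
snapshot_root     /mnt/usb/rsnapshot/
cmd_ssh           /usr/bin/ssh
retain            daily    7
retain            weekly   4
# Pull the NAS shares over rsync+ssh into versioned snapshots
backup    root@nas.local:/volume1/Photos/       nas/
backup    root@nas.local:/volume1/Documents/    nas/

# crontab entries on the Pi (run the larger interval shortly before the smaller one)
30 2 * * 0    /usr/bin/rsnapshot weekly
0 3 * * *     /usr/bin/rsnapshot daily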

kensagle commented 3 years ago

CrashPlan used to have this feature, but removed it when they got rid of the free offering. I used to use it, and it worked very well: you could just use another CP instance associated with your account as a destination.

FEndo commented 3 years ago

Thank you all for this post!

I had faced the same problem twice before and, thinking I was doing something wrong, simply reinstalled the container and ran the whole process for 4 TB again.

Now, on the 3rd time with the same problem, I figured there might be someone else in the same situation. :-) This time I just added each folder to the container via the Docker GUI itself, and apparently everything is working.

FEndo commented 3 years ago

I would like to invite everyone to Sponsor @jlesage at https://github.com/sponsors/jlesage. He saves us lots of money and time.

I just did it. 😉

uberwrensch commented 3 years ago

I would like to invite everyone to Sponsor @jlesage at https://github.com/sponsors/jlesage. He saves us lots of money and time.

I just did it. 😉

Thanks for the FYI. I had been wanting to show my appreciation in some small, but material way. Just did.

FEndo commented 3 years ago

I would like to invite everyone to Sponsor @jlesage at https://github.com/sponsors/jlesage. He saves us lots of money and time. I just did it. 😉

Thanks for the FYI. I had been wanting to show my appreciation in some small, but material way. Just did.

👍👏👏