jlesage / docker-crashplan-pro

Docker container for CrashPlan PRO (aka CrashPlan for Small Business)

Need basic help with installation - need more details #34

Open bgmess opened 6 years ago

bgmess commented 6 years ago

I am fairly new to Linux and Docker. I need more instructions than those given in the quick start:

Launch the CrashPlan PRO docker container with the following command:

docker run -d \
    --name=crashplan-pro \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /docker/appdata/crashplan-pro:/config:rw \
    -v $HOME:/storage:ro \
    jlesage/crashplan-pro

Where do I enter the above commands?

Is this all I need to do?

Any tips would be greatly appreciated. Not sure how to begin. I have installed Docker on the Synology NAS, but that's as far as I got.

Thanks, Brian

beBerlin commented 6 years ago

@bigtfromaz excuse me please. How and where should I enter the command?

bigtfromaz commented 6 years ago

Try this. Open a command window on your host where you are able to run docker commands. I use PuTTY to open a terminal to my Synology. If you don't know how to use PuTTY, search for it on Bing or Google. There is lots of help there.

Once you are logged in, if you don't know your container name, you can list it with this command:

sudo docker container ls

Once you have your container name, enter this command:

sudo docker exec -i -t jlesage-crashplan-pro1 /bin/sh

Note that my container name is "jlesage-crashplan-pro1". Use your own container name if it's different from mine.

The docker exec command should give you a prompt that looks something like this: /tmp #

At this point, you are "in" the container. Run these commands and send the output.

ifconfig
ip route

To summarize: use PuTTY to open a command prompt on your Synology. Log into the Synology using your administrator user id and password. Determine your container name. Issue the "sudo docker exec" command to get a command prompt inside the container. Enter the ifconfig and ip route commands and post the output. This will tell us about the network configuration inside your container.
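
Roughly, the whole sequence looks like this (the container name jlesage-crashplan-pro1 is mine; yours may differ):

sudo docker container ls                                 # find your container name
sudo docker exec -i -t jlesage-crashplan-pro1 /bin/sh    # open a shell inside the container
# now inside the container:
ifconfig
ip route
exit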

beBerlin commented 6 years ago

@bigtfromaz Thank you so much. Here are the screenshot of the commands. https://www.dropbox.com/s/sgohojwwmhem6l5/Ohne%20Titel%206.jpeg?dl=0 Is it correct that way?

bigtfromaz commented 6 years ago

@beBerlin That's what I wanted to see. If you look at the output, you can see that your container only has access to IPv4 addresses, e.g. 172.17.0.0/16, yet previous posts show that your host machine is IPv6 only. Your container is isolated from everything except other containers in the same bridge network, so it has no internet access.

I think the next step is to connect your container to the host network and not the default bridge network.

One way might be to stop the existing container, execute these commands and then restart the container. I don't have a lot of experience with Docker networks so perhaps someone will add guidance if needed.

sudo docker network disconnect bridge crashplan-pro
sudo docker network connect host crashplan-pro

Another way, if you use the GUI to create the container: delete the container, go to the image, recreate the container, and under Advanced Settings select "Use the same network as the Docker host". Once the container is running we'll want to see ifconfig results from "inside" the container again.

You could also recreate the container using docker run and use the --net host parameter.
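
As a rough sketch (the volume paths here are the ones from the quick start and are probably different on your system; with --net host the -p port mappings are not needed, since the container shares the host's network stack):

sudo docker run -d \
    --name=crashplan-pro \
    --net host \
    -v /docker/appdata/crashplan-pro:/config:rw \
    -v $HOME:/storage:ro \
    jlesage/crashplan-pro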

I know that @jlesage recommends bridge networks, but IPv6 is a very different addressing scheme, and much safer. If you have residential service from your ISP, your network prefix can and will change from time to time, which will make it very difficult to keep the container connected to the CrashPlan service using a bridge. You will have to change the bridge prefix by hand each time your prefix changes. If your ISP has assigned you a static IPv6 prefix then you could create a bridge, but I really don't see the point for a stand-alone container like this one.

Let me know how it goes.

beBerlin commented 6 years ago

It's awesome. It works! Now I have only one more question: how do I add folders from the DiskStation? In the web UI I only see strange folders and files that I don't recognize. I probably made a mistake during the installation. I want to back up almost my complete DiskStation to CrashPlan. Can I add the folders later?

Thank you once again very much.

bigtfromaz commented 6 years ago

Are you starting with the GUI or a command? Send the command you used to run the container for the first time, or screen snaps of your configuration screens.

beBerlin commented 6 years ago

At the link below you can see a screenshot. These are the folders I have.

https://www.dropbox.com/s/3k1ohjm7895dfh7/Ohne%20Titel%205.jpeg?dl=0

bigtfromaz commented 6 years ago

It looks like you are mapping the host /volume1 to the container /volume1. The container is configured to backup data stored in its /storage directory.

So change -v /volume1/:/volume1:ro to -v /volume1/:/storage:ro. This will make your Synology's volume1 visible to the container in /storage.
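
For example, the full command might look something like this (a sketch only; the /config path is a guess, and --net host keeps the host-network setup from earlier, so adjust to match whatever you actually used):

sudo docker run -d \
    --name=crashplan-pro \
    --net host \
    -v /volume1/docker/appdata/crashplan-pro:/config:rw \
    -v /volume1/:/storage:ro \
    jlesage/crashplan-pro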

Let's see how that goes.

beBerlin commented 6 years ago

Can you tell me how to enter the commands without removing the containers and reinstalling?

bigtfromaz commented 6 years ago

No new installation is needed, and there is no need to pull the image again. Just remove the container. The command to remove the container is docker container rm yourContainerName. To list your containers, use docker container ls -a. yourContainerName is the --name value you supplied when creating the container with the docker run command.

There is no need to clear the config folder. Change your docker run command as I suggested and run that command again. Then check to see if the files show up.
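
In other words, something along these lines (the container name, /config path and --net host are carried over from the earlier sketches; use your own values):

sudo docker container ls -a               # find the container name
sudo docker container rm crashplan-pro    # remove the container; the /config data stays on the host
sudo docker run -d \
    --name=crashplan-pro \
    --net host \
    -v /volume1/docker/appdata/crashplan-pro:/config:rw \
    -v /volume1/:/storage:ro \
    jlesage/crashplan-pro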

bigtfromaz commented 6 years ago

@beBerlin It appears that you do not yet have a lot of experience with Linux, Docker, or CrashPlan Pro. If you are doing this as a learning experience then keep reading and learning. If you are doing this to manage a mission-critical backup then I suggest that you consider switching to the GUI mode for creating and using this container on your Synology. The last thing you want is to find out your backup hasn't been running when you go to do a restore. @jlesage has some good Synology-related notes in the readme for this container.

I am pretty experienced in many technologies but chose to manage Docker on my Synology using the GUI. I do this for convenience; this way I have fewer scripts to create and maintain. @jlesage has created a very useful, well-thought-out and pretty complex image build. However, the Docker run-time requirements for this container are pretty basic, no swarms or complex firewalls required in Docker itself. The GUI is convenient, reliable and meets the needs of a simple deployment like this one.

beBerlin commented 6 years ago

Yes, I have no idea; that's why I'm asking for your help. Unfortunately, it still does not work for me. I do not understand how it can show the complete volume1. I only see strange folders that I don't recognize. What information do you need to help me? Maybe someone can help me via TeamViewer. I have almost finished the installation now and don't want to give up.

bigtfromaz commented 6 years ago

@beBerlin

How many file shares do you have set up on your Synology?

PNMarkW2 commented 6 years ago

Not an installation question as such, but I've been getting this message every time I've gone to check on my backup for the past couple of weeks.

"Routine maintenance. Backup will resume when maintenance completes."

Anyone else seeing this, or know its meaning?

jlesage commented 6 years ago

It’s normally not an issue. See https://support.code42.com/CrashPlan/6/Troubleshooting/Destination_unavailable, section “Under maintenance”.

beBerlin commented 6 years ago

I have 15 file shares.

aagrawala commented 6 years ago

Hi @excalibr18 & @jlesage, I need some help... I initially did the setup successfully on Feb 4th this year. Everything was working fine until about 26 days ago. Somehow the backup is not running, as reported by CrashPlan in their emails to me and on the web (see the attached snapshot).

screen shot 2018-07-01 at 7 33 35 pm

Do we need to download the image from the Registry section again to get the latest Crashplan? Or do something else? Do we have steps to be able to update Crashplan?

When I check the app by putting my IP address with 5800 port in my browser address field, I see that Crashplan is not able to update but the backup seems to have completed! (see the second snapshot)

screen shot 2018-07-01 at 7 08 14 pm

Thank you in advance! Anil

excalibr18 commented 6 years ago

@aagrawala, Crashplan pushed an update but the Docker image doesn't automatically apply it. You'll have to manually update the Docker image. Instructions for updating the Docker image can be found at: https://github.com/jlesage/docker-crashplan-pro#synology
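
The short version of those instructions (a sketch; the container name depends on what you used when creating it):

sudo docker pull jlesage/crashplan-pro    # fetch the latest image
sudo docker stop crashplan-pro            # stop the running container
sudo docker rm crashplan-pro              # remove it; settings are kept in the mapped /config folder
# then re-run the exact same docker run command you used when you first created the container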

PNMarkW2 commented 5 years ago

I need to change one of the environment variables. I did this once before, but it's been many months. My memory is I had to remove the container and recreate it with the new value. Is that correct? Seems rather drastic, but that's what I remember.

Thanks in advance. Mark

jlesage commented 5 years ago

Yes, that's the way to go. Note that even if the container is re-created, data/state associated with CrashPlan is not lost (assuming it is re-created with the same parameters as before, with the exception of your environment variable).
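
Roughly, the procedure is (the container name CrashPlan and the variable shown are examples taken from later in this thread):

sudo docker stop CrashPlan
sudo docker rm CrashPlan
# re-run your original docker run command with the updated -e value,
# e.g. -e CRASHPLAN_SRV_MAX_MEM=3072M, keeping all other parameters identical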

PNMarkW2 commented 5 years ago

Thanks. So I did that, but now I can't get to the web interface. I'm going to https://192.168.xxx.xxx:5800/ just like I've always done, but I get the message "This site can’t provide a secure connection" and "192.168.xxx.xxx sent an invalid response."

jlesage commented 5 years ago

Is the non-secure address working (http://192.168.xx.xx:5800)?

PNMarkW2 commented 5 years ago

Yes, it is; I feel silly for not trying that. That's what I get for doing this right before I needed to head out. Changing it to HTTP allowed me to see what's happening, and it now says it is synchronizing block information, which I assume is the correct state of affairs at this point. Like I said, it's been a long time since I last had to do this.

Is there a reason the https would not work when it would before? I was using a saved link to the web interface so I know it was the same link I used before I deleted the Docker container and recreated it with my new variable info.

jlesage commented 5 years ago

If HTTPS access was working before, it means that you probably forgot to set SECURE_CONNECTION to 1 when recreating the container.

PNMarkW2 commented 5 years ago

Very possible; I could not find my original script for setting up the container.

Another issue that's come up: it finished synchronizing block information, but it oddly thinks it's done and there is nothing to be backed up, which is not true; there is plenty to be backed up. Online I have the opposite: Code42 is warning me there has been no activity in over 13 days (which is what prompted my need to change CRASHPLAN_SRV_MAX_MEM). Something does not seem to be synced between the container and online.

jlesage commented 5 years ago

In Crashplan, for your device, when you click Details->Manage Files, do you see your files?

PNMarkW2 commented 5 years ago

That would be no. When I click on Manage Files I can see the folders that should contain files, but when I click into that folder there are no files visible.

PNMarkW2 commented 5 years ago

Any thoughts on why my local install doesn't show any files, even though it does show my folders?

gatorheel commented 5 years ago

I had this when my new container arguments were wrong. It looked like I could see my folders, but it was really just showing me the structure from my prior backup. I would double-check your -v paths to make sure you have them correct.

jlesage commented 5 years ago

Likely a permission issue. Did you set the USER_ID and GROUP_ID to the same value as your original container?
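
If you are not sure which values to use, you can check them from an SSH session on the host (a generic sketch; /volume1/yourshare is a placeholder for one of the shares you back up):

id                                # uid/gid of the user you are logged in as
sudo ls -ln /volume1/yourshare    # numeric owner and group of the files to back up

Then pass those numbers to the container with -e USER_ID=... and -e GROUP_ID=... when recreating it.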

PNMarkW2 commented 5 years ago

Okay, I think between the two of you that you've helped highlight where I went wrong. I'm going to have to delete the container and recreate it to update some of the variables. I'll let you know the results.

PNMarkW2 commented 5 years ago

On the plus side, I can see the full directory structure and the files on my network now. The downside seems to be that it's acting like it's starting over instead of resuming where it left off. I say this because it claims to have backed up only 13GB of a multi-TB dataset. Now maybe it's only reporting what it's done since I restarted it, but it reads as if we're back at square one.

PNMarkW2 commented 5 years ago

I tried again to delete the container and recreate it, only this time I read somewhere to remove any files from the system related to the old container. So yes, for good or for bad I did that. This time it started up much like when I created it the very first time by asking me to log in, but now it's stuck there. It just sits on that screen and says "Signing in..." and it's been that way for 7 hours now.

jlesage commented 5 years ago

Try to restart the container. Also can you post the command line you used to create the container?
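
For reference, restarting is just the following (assuming your container is named CrashPlan; adjust to whatever --name you used):

sudo docker restart CrashPlan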

PNMarkW2 commented 5 years ago

The restart seemed to do some good; I had to log in again but I'm able to navigate around, and it's not just stuck on "Signing in". However, it still looks like it has started over, telling me that it's only 13% complete; that should be more like 50-60%.

As requested, here is my startup script:

docker run -d \
    --name=CrashPlan \
    -e USER_ID=0 -e GROUP_ID=0 \
    -e CRASHPLAN_SRV_MAX_MEM=3072M \
    -e SECURE_CONNECTION=1 \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /volume1/docker/appdata/crashplan:/config:rw \
    -v /volume1/:/volume1:ro \
    --restart always \
    jlesage/crashplan-pro

jlesage commented 5 years ago

Looking at the progression doesn't tell you whether your data is actually being uploaded or deduplicated. You can look at the history (Tools->History) for more details.

rekuhs commented 5 years ago

I'm having trouble mapping additional volumes, the first one is fine and works perfectly but I have volumes 2, 3, 4 and 5 that I need to map.

This works and gets Vol1 mapped:

docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro jlesage/crashplan-pro

But adding to that to map the additional volumes doesn't:

docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro jlesage/crashplan-pro -v /volume2/:/volume2:ro jlesage/crashplan-pro -v /volume3/:/volume3:ro jlesage/crashplan-pro -v /volume4/:/volume4:ro jlesage/crashplan-pro -v /volume5/:/volume5:ro jlesage/crashplan-pro

Using that returns the following error: docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"-v\": executable file not found in $PATH".

The Container is created but can't be started.

I will admit that I don't really know what I'm doing

Can anyone help?

jlesage commented 5 years ago

You have jlesage/crashplan-pro repeated too many times in your command line. It should appear only once, as the last argument:

docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro -v /volume2/:/volume2:ro -v /volume3/:/volume3:ro -v /volume4/:/volume4:ro -v /volume5/:/volume5:ro jlesage/crashplan-pro

rekuhs commented 5 years ago

You have jlesage/crashplan-pro repeated too many times in your command line. It should appear only once, as the last argument:

docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro -v /volume2/:/volume2:ro -v /volume3/:/volume3:ro -v /volume4/:/volume4:ro -v /volume5/:/volume5:ro jlesage/crashplan-pro

Thank you :)

THX-101 commented 5 years ago

I have a question about permissions: should I use -e USER_ID=0 -e GROUP_ID=0, or is this a security risk? I am running this on a Synology. Also, am I supposed to run the "docker run -d ..." command with sudo or not?

jlesage commented 5 years ago

You are running as root when using USER_ID=0 and GROUP_ID=0, which is not considered good practice. So if you are able, you should use a user/group that has permission to access the files you want to back up.

To create the container, you have the choice: either you manually run the docker run command, or you use the Synology UI.

THX-101 commented 5 years ago

When using putty to build my containers, I log in with my Synology admin user, but I'm always required to run the docker run command with sudo. That is the way it is supposed to be, right?

And concerning not running as root, what would be best practice? Make a 'docker' group and a 'docker' user that has read/write access to both the /volume1/docker folder and /volume1/share, and nothing else?

jlesage commented 5 years ago

Correct, the docker run command needs to be run as root, via sudo.

And yes, creating an additional user with restricted permissions is a solution.
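
As a sketch of how that might look once the user exists (the name backupuser and the numeric IDs are made up; on a Synology you would create the user and grant share permissions through DSM first):

id backupuser
# suppose that prints: uid=1027(backupuser) gid=100(users)

sudo docker run -d \
    --name=crashplan-pro \
    -e USER_ID=1027 \
    -e GROUP_ID=100 \
    -p 5800:5800 \
    -v /volume1/docker/appdata/crashplan-pro:/config:rw \
    -v /volume1/:/storage:ro \
    jlesage/crashplan-pro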

THX-101 commented 5 years ago

And one other question: I just ran the container with -e USER_ID=0 -e GROUP_ID=0 for the very first time (switched from the Windows client). I suppose it is best to let CrashPlan finish synchronising block information before I tear down the container, right?

jlesage commented 5 years ago

I think it should not be a problem to stop the container before it ends the synchronization. Once you re-start the container, CrashPlan should just continue where it left off.

THX-101 commented 5 years ago

Thanks jlesage, you are a one-woman triple-A customer support. Very much appreciated! 🥇

edit: So sorry for being so sexist. I assumed you were a guy.

PNMarkW2 commented 5 years ago

I recently had to reset my Synology from the ground up, which of course meant having to reset CrashPlan. So after I got the Synology running, I added Docker, followed closely by adding CrashPlan. Having had to reset CrashPlan once before, I made sure to keep the settings I used.

docker run -d \
    --name=CrashPlan \
    -e USER_ID=0 -e GROUP_ID=0 \
    -e CRASHPLAN_SRV_MAX_MEM=3072M \
    -e SECURE_CONNECTION=1 \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /volume1/docker/appdata/crashplan:/config:rw \
    -v /volume1/:/volume1:ro \
    --restart always \
    jlesage/crashplan-pro

But having done this, all CrashPlan will do is constantly scan files. It goes for a while, then restarts over and over. Today is 27 days since it performed any sort of backup. I've tried to delete it to reinstall, but when I try that I'm told that there are containers dependent on CrashPlan and it won't let me.

Any thoughts or suggestion would be welcome.

Thank you

Mark

jlesage commented 5 years ago

I would try to look at the history (View -> History) and at /volume1/docker/appdata/crashplan/log/service.log to see if there is anything obvious. Also, I guess you are running the latest docker image?
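
A couple of quick ways to inspect that log from an SSH session on the Synology (the path is the one implied by your /config mapping):

sudo tail -n 100 /volume1/docker/appdata/crashplan/log/service.log
sudo grep -i -E 'error|exception|upgrade' /volume1/docker/appdata/crashplan/log/service.log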

PNMarkW2 commented 5 years ago

Like taking a car to a mechanic, it spent a month scanning for files and seems to be "happy" now.

Under Tools -> History, all it appears to complain about is failing to upgrade to a newer version. As I said, I had to redo my entire Synology. That involved downloading and installing Docker again, so I would assume it is the latest and greatest version available. I also had to get the CrashPlan package again, and again I would have assumed that to be the latest as well, but maybe not, since it's trying to update so soon.

Right now it looks like it's backing up files. What still seems off, though, is that it doesn't seem to think anything was backed up previously, as if it didn't sync with my previous backup after the reinstall. That doesn't quite match what I see when I log in to view my archive online; there it shows a good chunk of data with my last activity within the past 24 hours.

I'm still a bit confused, but it seems to be running, maybe. :-)

jlesage commented 5 years ago

For your information, I just published a new docker image containing the latest version of CP.