MasterG87 opened this issue 3 years ago
I have the same issue. For me it is also one of the .db files: it works up to a point, but when I add more files from the same share and the file grows, it eventually starts getting the 413 error. What is weird is that the file is only around 200 MB at that point, so I would not expect the size itself to be the problem.
I am afraid it is a problem with the WebDAV implementation on the OneDrive side. I found documents saying there is a request size limit that depends on the type of subscription; for business it is around 3 GB, but for home plans it might be much lower. It looks like files bigger than that limit have to be uploaded in chunks and assembled on the remote side.
I am not sure whether it is possible to implement such functionality in this code, because as far as I understand it, it only acts as an authentication proxy: once authentication is done, it passes requests through as they arrive.
I have found a third-party solution that looks promising, but I am not sure whether it would help. I currently use a private OneDrive with a 1 TB storage limit, and for me the problem starts at around 250-300 MB file size. At least the backup itself is chunked into 50 MB files.
A bit more googling showed me that rclone supports OneDrive, including its passport authentication and chunked uploads, and that it can also act as a WebDAV server. It seems to be a feasible alternative to this great, yet limited, solution: https://rclone.org/commands/rclone_serve_webdav/
Wow, this looks like a great replacement for this project. If rclone connects to OneDrive directly, there won't be any problem with OneDrive's WebDAV limit! I have also found a post saying that the container proxy has a chance of damaging the backup; I guess it is talking about this WebDAV proxy server, but I'm not sure. rclone might be a better choice for everyone else...
Well, at least I managed to switch successfully to rclone, and except for one small quirk it runs flawlessly. The quirk: with a slow upload link you cannot effectively limit the write cache size. Synology writes faster than your uplink can handle, so the local cache grows. This should really only be a problem during the initial backup, so do not select all folders right away; add them one by one while you have enough cache space available. rclone just cannot throttle the Synology backup to write more slowly. But with the settings below, cache files get purged right after being uploaded and it works like a charm:
After the initial configuration with:
docker run -it --rm -v /volume1/docker/rclone/config:/config rclone/rclone:latest -vv config
I run the rclone docker container like this:
docker run --name rclone-backup-gw -d -v /volume1/docker/rclone/config:/config -e TZ=Europe/Warsaw -p 8080:8080 rclone/rclone:latest -v --log-file=/config/rclone.log serve webdav --addr :8080 --vfs-cache-mode full --vfs-cache-max-size 1024M --poll-interval 0 --vfs-cache-max-age 0h0m10s --user xxx@xxx.xxx --pass XXXXXXXXX onedrive:
Then in Hyper Backup I use http://synology_ip:8080/ and voila, it works like a charm. Feel free to change /volume1/docker/rclone/ to any shared folder of your choice. Note that the cache stays in the container's root filesystem (it is written to /root/.cache or something similar), so make sure there is enough storage available where you run Docker.
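If you would rather not keep the cache inside the container filesystem, you can point rclone's cache at a mounted shared folder instead. A sketch, assuming an extra /volume1/docker/rclone/cache folder (the --cache-dir flag is the same one used in the compose examples further down):

```
docker run --name rclone-backup-gw -d \
  -v /volume1/docker/rclone/config:/config \
  -v /volume1/docker/rclone/cache:/cache \
  -e TZ=Europe/Warsaw -p 8080:8080 rclone/rclone:latest \
  -v --log-file=/config/rclone.log serve webdav --addr :8080 \
  --cache-dir /cache --vfs-cache-mode full --vfs-cache-max-size 1024M \
  --poll-interval 0 --vfs-cache-max-age 0h0m10s \
  --user xxx@xxx.xxx --pass XXXXXXXXX onedrive:
```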
Oh wow, that is exactly what I was trying to do yesterday to speed up uploads. I had already set up the Docker container on my own, but thanks for sharing your setup anyway :D I ran the container with host network mode, listening on 127.0.0.1, to avoid exposing the WebDAV proxy outside the NAS. It is really useful: rclone's parallel transfers can use up all of my bandwidth, but rclone's VFS write cache simply explodes my cache folder no matter how I configure it. Then I have to pause the backup and switch back to the pure proxy server setup (with VFS off and slower, unstable upload speed). Just like you said, this should only be a problem for the initial backup; I spent about two days backing up around 300 GB of data to the cloud.
The write-throttling issue has been discussed here, but I didn't find any further information or updates about it.
@bartowl Something caught my eye: one of the discussions mentions that rclone will freeze writes until space is released. That gave me the idea that adding a space limit on the cache folder might work as a write throttle. But I have a feeling that won't be stable behavior... 🤔
Well, that is exactly not the case with --vfs-cache-mode full, because everything Synology writes goes into the cache first. I have found that even limiting the cache to 1 GB with --vfs-cache-max-size 1024M did not stop the cache directory from growing to a few GB. You can of course disable write caching, and then rclone will throttle Synology... but Synology cannot deal with that well and runs into timeouts, which lead to aborted backup jobs. So there is no perfect solution here either. But I have only found this to be a problem with a slow uplink and the initial full backup. Later backups usually get enough time to flush the cache, which actually improves overall backup speed because the bandwidth stays maxed out for the whole job... Just keep a bit more disk space available.
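For reference, the throttled variant mentioned above is just the same command with write caching disabled; a sketch (it avoids the growing cache, but as described Synology may then run into timeouts on a slow uplink):

```
docker run --name rclone-backup-gw -d \
  -v /volume1/docker/rclone/config:/config \
  -e TZ=Europe/Warsaw -p 8080:8080 rclone/rclone:latest \
  -v --log-file=/config/rclone.log serve webdav --addr :8080 \
  --vfs-cache-mode off --poll-interval 0 \
  --user xxx@xxx.xxx --pass XXXXXXXXX onedrive:
```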
@bartowl I tried to follow your setup and managed to start the docker container like this:
docker run --name rclone-backup-gw -d -v /volume1/docker/config/rclone:/config -e TZ=Europe/Berlin --restart=always -p 8080:8080 rclone/rclone:latest -v --log-file=/config/rclone.log serve webdav --addr :8080 --vfs-cache-mode full --vfs-cache-max-size 1024M --poll-interval 0 --vfs-cache-max-age 0h0m10s onedrive
I also managed to set up a task in Synology's Hyper Backup via WebDAV, using a folder called test and my credentials. I tried to execute a MariaDB backup, but I can't seem to find it in OneDrive. Do you happen to have any pointers? All I could see in the rclone logs were these errors:
2021/07/27 22:07:34 ERROR : {test/MyNas_1.hbk/synobkpinfo.db}: Expecting fs.Object or fs.Directory, got <nil>
2021/07/27 22:07:34 ERROR : {test/MyNas_1.hbk/synobkpinfo.db}: Expecting fs.Object or fs.Directory, got <nil>
2021/07/27 22:07:35 ERROR : {test/MyNas_1.hbk/synobkpinfo.db}: Expecting fs.Object or fs.Directory, got <nil>
2021/07/27 22:07:44 ERROR : {test/MyNas_1.hbk/Control/lock/lock_keep_alive.@writer_version_0.some-uuid}: Expecting fs.Object or fs.Directory, got <nil>
2021/07/27 22:07:44 ERROR : {test/MyNas_1.hbk/Control/lock/lock_keep_alive.@writer_version_0.some-uuid}: Expecting fs.Object or fs.Directory, got <nil>
2021/07/27 22:08:43 ERROR : {test/MyNas_1.hbk/Control/@writer/v1.1.2.4.1.backup.only_upload.cinfo}: Expecting fs.Object or fs.Directory, got <nil>
2021/07/27 22:08:43 ERROR : {test/MyNas_1.hbk/Control/@writer/v1.1.2.5.1.backup.remove_conflict.cinfo}: Expecting fs.Object or fs.Directory, got <nil>
2021/07/27 22:08:43 ERROR : {test/MyNas_1.hbk/Control/@writer/v1.1.2.6.1.backup.commit.cinfo}: Expecting fs.Object or fs.Directory, got <nil>
2021/07/27 22:08:43 ERROR : {test/MyNas_1.hbk/Control/@writer/v1.1.2.4.1.backup.only_upload.cinfo}: Expecting fs.Object or fs.Directory, got <nil>
2021/07/27 22:08:43 ERROR : {test/MyNas_1.hbk/Control/@writer/v1.1.2.5.1.backup.remove_conflict.cinfo}: Expecting fs.Object or fs.Directory, got <nil>
2021/07/27 22:08:43 ERROR : {test/MyNas_1.hbk/Control/@writer/v1.1.2.6.1.backup.commit.cinfo}: Expecting fs.Object or fs.Directory, got <nil>
2021/07/27 22:08:43 ERROR : {test/MyNas_1.hbk/Control/@writer/v1.2.-1.0.0.none.none.cinfo}: Expecting fs.Object or fs.Directory, got <nil>
Any pointers are highly appreciated.
EDIT
docker run --name rclone-backup-gw -d -v /volume1/docker/config/rclone:/config -e TZ=Europe/Berlin --restart=always --net=host rclone/rclone:latest -v --log-file=/config/rclone.log serve webdav --addr :8080 --vfs-cache-mode full --vfs-cache-max-size 1024M --poll-interval 0 --vfs-cache-max-age 0h0m10s --user=rclone --pass=verySecure onedrive:
Makes more sense, but unfortunately the ERRORs persist.
EDIT1
Turns out type=onedrive was missing from my config file!
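For anyone else who runs into this: a minimal sketch of what the remote section in /config/rclone.conf should roughly look like (the token is produced by the interactive rclone config run; all values here are placeholders):

```
# sketch of /config/rclone.conf as written by `rclone config`
[onedrive]
type = onedrive
token = {"access_token":"...","refresh_token":"...","expiry":"..."}
drive_id = <your-drive-id>
drive_type = personal
```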
I am glad that you found a solution for my initial problem back then. Yesterday I also tried to set everything up on my Synology, but I ran into the problem that nothing gets uploaded to my OneDrive.
I ran the following commands:
sudo docker run -it --rm -v /volume1/docker/config/rclone:/config rclone/rclone:latest -vv config
Then I had to complete the OneDrive setup process in rclone.
Second command:
sudo docker run --name rclone-backup-gw -d -v /volume1/docker/config/rclone:/config -e TZ=Europe/Berlin --restart=always --net=host rclone/rclone:latest -v --log-file=/config/rclone.log serve webdav --addr :50000 --vfs-cache-mode full --vfs-cache-max-size 1024M --poll-interval 0 --vfs-cache-max-age 0h0m10s --user=rclone --pass=verySecure onedrive:
The command failed because rclone-backup-gw already existed among my Docker containers, so I renamed the existing one created by the first command.
Then I set up a new Hyper Backup job against the WebDAV server http://synologyip:8080/. Hyper Backup did create the correct *.hbk locally on my WebDAV server, but did not upload anything to my OneDrive. What did I do wrong? I can attach my rclone.log file if required.
@bartowl Could you please share your initial configuration with us? Did you use an application password for the auth with OneDrive?
I'm sorry, it was my fault... I have Office 365 and 1 TB of storage, and I had selected "Microsoft Cloud Germany". I had to select "Microsoft Cloud Global" instead.
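If I read the rclone onedrive docs correctly, that wizard choice ends up as the region value in the remote's config section, so the fix corresponds to something like this (treat the exact key and values as an assumption on my side):

```
# sketch: national-cloud choice in the rclone config
[onedrive]
type = onedrive
region = global   # "de" (Microsoft Cloud Germany) was what broke the uploads here
```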
Hello @stiller-leser
This may not be the right place to ask, but I couldn't find any information on this elsewhere: I get the same error messages you cited in the log, but the backup seems to work (my config contains type=onedrive and a partial restore was successful).
Did those messages completely disappear from your log file?
Hi all,
I have the same problem with Hyper Backup and OneDrive using basic-to-passport-auth-http-proxy: Server error: status='413', msg = 'Request Entity too large'. Did anyone figure out what the exact problem is? From what I can see, the files Hyper Backup creates are 50 MB max, so it shouldn't be the file size limitation from OneDrive (https://support.microsoft.com/en-us/office/restrictions-and-limitations-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa). Is this maybe a limitation on the number of files? But then chunking wouldn't solve the problem either...
Cheers, Florian
@fbloemer: I have the same issue. Hyper Backup creates .db files and they can exceed the 50 MB chunk size. My current one is 340 MB, and the WebDAV Hyper Backup job has started failing because of this size. You can verify this by viewing /var/log/messages around the time of the "suspend" event and then checking the size of the file (ls -lah FILENAME).
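A quick sketch of that check on the NAS shell (the paths are examples taken from the log snippets in this thread):

```
# look for the 413 / suspend entries around the failure time
sudo grep -iE '413|suspend' /var/log/messages | tail -n 20

# then check how large the failing .db file has grown in the local backup cache
ls -lah /volume1/@img_bkp_cache/*/*.hbk/Config/@Share/Backup/*.db
```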
I tried following the above instructions for rclone and continually had issues using hyperbackup for large datasets to onedrive.
I was getting a lot of errors such as:
transfer_webdav.cpp: remove failed: -560, File is locked
and related errors (I no longer have the logs) about very large files missing from the VFS cache and WebDAV timing out.
I have it working now by turning VFS caching off completely. So in case it helps someone else using Hyper Backup with rclone, the following docker-compose is currently working for me:
version: '2'
services:
rclone-servers:
container_name: rclone-1
image: rclone/rclone:latest
stdin_open: true # docker run -i
tty: true # docker run -t
restart: unless-stopped
user: $UID:$GID # Run as non-root
ports:
- "127.0.0.1:8080:8080"
volumes:
- /volume1/docker/config/rclone:/config:rw
- /etc/localtime:/etc/localtime:ro # timezone
- /etc/passwd:/etc/passwd:ro # make custom non-root user known to container
command:
["serve", "webdav", "OneDrive:", "--addr", ":8080", "--vfs-cache-mode", "off",
"--poll-interval", "0", "--config", "/config/rclone.conf"]
EDIT (two days later):
After running the above I found I was getting slow upload speeds with certain types of files. My new command, which produces no errors (fingers crossed) and is 10x faster, is below. Basically, I turned the cache mode to writes and increased the chunk size.
["serve", "webdav", "OneDrive:", "--cache-dir", "/cache", "--addr", ":8080",
"--vfs-cache-mode", "writes", "--poll-interval", "0", "--config", "/config/rclone.conf",
"--onedrive-chunk-size", "128000k"]
Thanks for the feedback @catchdave! I ran my backup again and it now fails only at around 95% (and 660 GB of the 1 TB OneDrive capacity):
2022-06-10T09:23:41+08:00 DiskStation2 img_worker[9528]: transfer_webdav.cpp:240 send_file failed (-500,Request Entity too large) Failed to sendFile at [DiskStation2_2.hbk/Config/@Share/Backup/1.db.2]
2022-06-10T09:23:41+08:00 DiskStation2 img_worker[9528]: (9528) file_transfer.cpp:799 failed to sendfile to [DiskStation2_2.hbk/Config/@Share/Backup/1.db.2] from [/volume1/@img_bkp_cache/webdavBackup.JSdzpz/DiskStation2_2.hbk/Config/@Share/Backup/1.db], data_hot=[1], error = [3008]
2022-06-10T09:23:41+08:00 DiskStation2 img_worker[9528]: (9528) file_transfer.cpp:642 failed to sendfile: trg_relative path: [Config/@Share/Backup/1.db], err: [3008]
2022-06-10T09:23:41+08:00 DiskStation2 img_worker[9528]: (9528) cloud_upload_controller.cpp:2143 failed to sync upload file [Config/@Share/Backup/1.db], ret[37]
2022-06-10T09:23:41+08:00 DiskStation2 img_worker[9528]: (9528) cloud_upload_controller.cpp:2271 failed to upload file: [/volume1/@img_bkp_cache/webdavBackup.JSdzpz/DiskStation2_2.hbk/Config/@Share/Backup/1.db] --> [DiskStation2_2.hbk/Config/@Share/Backup/1.db], blRemoveAfterUpload: [0], dataClass: [1]
2022-06-10T09:23:41+08:00 DiskStation2 img_worker[9528]: (9528) cloud_upload_controller.cpp:1935 failed to do the job
The file where the upload fails is 3.8 GB in the local img_bkp_cache. But 3.8 GB is way below the maximum file size outlined on the Microsoft website: https://support.microsoft.com/en-us/office/restrictions-and-limitations-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa#individualfilesize Also, there was no issue uploading this file through the OneDrive web interface.
Therefore it looks like it is an undocumented limitation of the WebDAV access of the Office 365 Family Plan OneDrive.... This post discusses the same problem: https://forum.rclone.org/t/onedrive-webdav-413-request-entity-too-large/17091
I really wanted to avoid chunking since it makes disaster recovery much harder :(
Are you sure it makes recovery harder? Why don't you set it up and try a restore.
I use Hyper Backup with rclone, have tested a restore, and it worked fine for me.
Once it is chunked, you cannot simply download the files and put them on a USB hard drive if you need to recover the backup; rclone needs to be running to restore it. But I guess there is no other choice :)
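For what it's worth, a one-off rclone run can still pull the whole backup target down to local or USB storage for an offline copy; a sketch, assuming the remote is named onedrive and using the folder names from the examples above:

```
# copy the .hbk backup target from OneDrive to a local/USB path
rclone copy onedrive:test/MyNas_1.hbk /volumeUSB1/usbshare/MyNas_1.hbk \
  --config /config/rclone.conf -P
```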
Yes, that is true. And yes that is part of the issue. Two dependencies here are hyperbackup and rclone. You do make a good point.
Maybe the reality is we need to switch to something else like crashplan ($120 for unlimited data) 🤷
Can someone share their working docker command and hyper backup set up?
I used this command to create the Docker container, but I cannot set up Hyper Backup because it cannot connect to the backup target.
This works for me at the moment.
rclone docker-compose:
services:
rclone:
container_name: rclone-1
image: rclone/rclone:latest
stdin_open: true # docker run -i
tty: true # docker run -t
restart: unless-stopped
user: $UID:$GID # non-root user and group
ports:
- "127.0.0.1:3001:8080"
volumes:
- /host/path/to/config/rclone:/config:rw
- /host/path/to/cache/rclone-cache:/cache:rw
      - /etc/localtime:/etc/localtime:ro # synchronize host/container timezone
- /etc/passwd:/etc/passwd:ro # For non-root users/group set above
command:
["serve", "webdav", "OneDrive:", "--cache-dir", "/cache", "--addr", ":8080",
"--vfs-cache-mode", "writes", "--poll-interval", "0", "--config", "/config/rclone.conf", "-vv",
"--onedrive-chunk-size", "128000k", "--vfs-cache-max-age", "0h5m0s"]
My Hyper Backup target is WebDAV, pointing at http://localhost:3001. I use my OneDrive username and password.
Do I need to change the $UID and $GID to the ones I use on my Synology (PGID=101, PUID=10xx)? I changed the volumes to this:
volumes:
- /volume1/docker/rclone1/config:/config:rw
- /volume1/docker/rclone1/cache:/cache:rw
I created an rclone.conf file via the how-to on the rclone site, and the Docker logs show:
rval" "0" "--config" "/config/rclone.conf" "-vv" "--onedrive-chunk-size" "128000k" "--vfs-cache-max-age" "0h5m0s"]
2022/06/18 18:27:36 DEBUG : Creating backend with remote "OneDrive:"
2022/06/18 18:27:36 DEBUG : Using config file from "/config/rclone.conf"
and then the container restarts.
What did I do wrong?
From the log file it looks like you didn't configure your OneDrive backend in rclone (because it said it's creating a backend). I personally used the name OneDrive for the remote I set up in rclone; when you set yours up, you will need to use the same name or modify the command above. I'd suggest reading the rclone docs, which walk you through configuring OneDrive. The rclone setup is interactive (i.e. you will need a browser somewhere).
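A sketch of the headless workflow, in case it helps (this assumes the rclone authorize route; names and paths are only examples):

```
# on any desktop machine that has a browser:
rclone authorize "onedrive"
# ...log in, then copy the token JSON that gets printed

# on the NAS, run the interactive config, answer "No" to auto config,
# paste the token when asked, and name the remote "OneDrive" to match the serve command:
docker run -it --rm -v /volume1/docker/config/rclone:/config \
  rclone/rclone:latest config
```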
Thank you all for posting the rclone solution. I got it working today. The only issue was the config part, which I wasn't able to complete on the Synology; I couldn't access the auth proxy, so I had to create the conf file on my Windows machine. :(
The only downside of using rclone is that you need one container for each OneDrive account; with the skleeschulte solution you only need one. I still haven't looked at all the documentation, but has anyone succeeded in using one container for several accounts?
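One untested idea (purely a sketch based on my reading of the rclone docs for the combine backend, so treat the syntax as an assumption): merge several OneDrive remotes into one combined remote and serve only that, which would keep it to a single container:

```
# sketch of /config/rclone.conf with two accounts merged via the combine backend
[onedrive-a]
type = onedrive
# ...token etc. for account A

[onedrive-b]
type = onedrive
# ...token etc. for account B

[all-onedrive]
type = combine
upstreams = accountA=onedrive-a: accountB=onedrive-b:

# then serve only the combined remote, e.g.:
#   rclone serve webdav all-onedrive: --addr :8080 --config /config/rclone.conf
```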
Could you please provide detailed end-to-end instructions on how to configure Hyper Backup to OneDrive via a local rclone? There are some config files above, but that is not enough to understand the entire process.
I realized that there is a recurring problem with one specific Hyper Backup job to OneDrive (Docker: basic-to-passport-auth-http-proxy). Some backups work fine, but this one doesn't. Do you have any idea how to fix the following error from /var/log/messages on my Synology NAS? It seems that one file might be too large for the WebDAV setup. Do you know how to solve this?
cloudstorage/protocol/webdav/webdav-error.cpp(126): Undefined error : 500
img_worker: webdav-protocol.cpp(168): Server error: status='500', msg = 'Undefined http status code'
img_worker: transfer_webdav.cpp:235 need retry #0: send_file failed: -9900, Undefined http status code
img_worker: [28672]cand_chunks.cpp:206 Warning: two candidate chunks have the same checksum!!
img_worker: [28671]cand_chunks.cpp:206 Warning: two candidate chunks have the same checksum!!
img_worker: cloudstorage/protocol/webdav/webdav-error.cpp(126): Undefined error : 500
img_worker: webdav-protocol.cpp(168): Server error: status='500', msg = 'Undefined http status code'
img_worker: transfer_webdav.cpp:235 need retry #0: send_file failed: -9900, Undefined http status code
img_worker: webdav-protocol.cpp(168): Server error: status='413', msg = 'Request Entity too large'
img_worker: send_file:240 failed(-500,Request Entity too large) Failed to sendFile at [BackupMyNAS.hbk/Config/@Share/Backup/2.db.3]
img_worker: (27942) file_transfer.cpp:786 failed to sendfile to [BackupMyNAS.hbk/Config/@Share/Backup/2.db.3] from [/volume1/@img_bkp_cache/webdavDrive4.9Nrrd6/BackupMyNAS.hbk/Config/@Share/Backup/2.db], data_hot=[1], error = [3008]
img_worker: (27942) file_transfer.cpp:629 failed to sendfile: trg_relative path: [Config/@Share/Backup/2.db], err: [3008]
img_worker: (27942) cloud_upload_controller.cpp:2141 failed to sync upload file [Config/@Share/Backup/2.db], ret[36]
img_worker: (27942) cloud_upload_controller.cpp:2269 failed to upload file: [/volume1/@img_bkp_cache/webdavDrive4.9Nrrd6/BackupMyNAS.hbk/Config/@Share/Backup/2.db] --> [BackupMyNAS.hbk/Config/@Share/Backup/2.db], blRemoveAfterUpload: [0], dataClass: [1]
img_worker: (27942) cloud_upload_controller.cpp:1934 failed to do the job
img_backup: (27853) backup_controller.cpp:1364 Failed to [CMD_END]: response:[ST_TRANSFER_UNKNOWN], resume_status: [RESUMABLE]
img_backup: (27853) backup_controller.cpp:1505 failed to handle packet from cloud uploader
img_backup: (27853) backup_controller.cpp:1528 failed to handle worker response
img_backup: (27853) backup_controller.cpp:3693 INFO: abnormal terminate: blWorkerAlive[0], blWaitCloudUploaderEndResponse[0], blWaitBackupEnd:[0]