JonathanTreffler / backblaze-personal-wine-container

Run the Backblaze personal backup client in a docker container
https://hub.docker.com/r/tessypowder/backblaze-personal-wine
GNU Affero General Public License v3.0

ERR_NotificationDialog_bad_bzdata_permission #38

Open Rowdy opened 1 year ago

Rowdy commented 1 year ago

Backup is working on my Synology. Unfortunately, every time I start the Docker container I get this weird error message. Any idea what that's about? (screenshot: 2022-12-01 at 20 53 25 - Backblaze Personal Backup) Thanks in advance

jmejiaperez commented 1 year ago

I am also experiencing this problem, and when I get this error message the app won't upload anything at all.

Rowdy commented 1 year ago

You're right. The interface is showing some activity, but if I check back in the iOS console it looks like it's not backing up at all. Can anyone help out here, guys? Much appreciated 🙏🏻


Rowdy commented 1 year ago

I am also experiencing this problem, and when I get this error message the app won't upload anything at all.

Which system are you on and which tag are you using?

jmejiaperez commented 1 year ago

I am also experiencing this problem, and when I get this error message the app won't upload anything at all.

Which system are you on and which tag are you using?

I am using Unraid with the following tag: tessypowder/backblaze-personal-wine

Rowdy commented 1 year ago

Interesting. I'm on a Synology here. Have you found any workaround yet?


jmejiaperez commented 1 year ago

Sadly, no. I haven't had the chance to look at why this is happening; it just happens randomly. I don't know enough about the software to look into it myself. I guess I should do that.

jmejiaperez commented 1 year ago

It looks like I had to add the user ID and group ID to the Docker container so that it can read the files. Once I added that, the error message went away and a lot of the missing files that weren't uploaded began to upload.

Rowdy commented 1 year ago

You mean adding it at the start of the Docker container? Would you mind sharing your start command?


jmejiaperez commented 1 year ago

Well, for Unraid it's best to use the user ID and group ID of root, which you can get just by running id in the terminal, so I just did -e USER_ID=0 -e GROUP_ID=0
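As a rough sketch, a full docker run command with those variables could look like this (the container name and volume paths are placeholders, not taken from this thread):

```sh
# Run the container as root (UID/GID 0) so it can read all mounted data.
# Container name and volume paths are placeholders; adjust for your setup.
docker run -d \
  --name backblaze_personal_backup \
  -e USER_ID=0 \
  -e GROUP_ID=0 \
  -v /path/to/config:/config \
  -v /path/to/data:/drive_d \
  tessypowder/backblaze-personal-wine
```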

jbscout commented 1 year ago

Same issue

Synology image: tessypowder/backblaze-personal-wine:ubuntu18

Rowdy commented 1 year ago

Well, for Unraid it's best to use the user ID and group ID of root, which you can get just by running id in the terminal, so I just did -e USER_ID=0 -e GROUP_ID=0

Have you tried another UID? And have you been able to solve it?

JonathanTreffler commented 1 year ago

I am not familiar with Synology.

The uid/gid needs to belong to a user on the host that can access all data and all config files.

For Unraid, root (0/0) is likely the best option, and it is working for my personal setup.

For Synology I have no way of testing, but my first debugging step would be to find out the default user and group of the files on the storage and use their uid and gid.
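From a shell on the NAS, that could look something like this (the share path is a placeholder for wherever the backed-up data lives):

```sh
# Show the numeric owner UID and GID of the files on the host.
# /volume1/your-share is a placeholder path.
ls -ln /volume1/your-share | head
# or, with GNU stat:
stat -c '%u %g %n' /volume1/your-share/* | head
```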

Sadly permissions with docker are already a complex topic and wine adds another layer of complexity.

To skip the wine complexity you could try to enter a shell inside the container with the uid/gid you are currently testing (docker exec -u uid:gid [container name] bash) and test whether you can read and write the files inside the data folder.
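For example (the container name and the data mount point inside the container are placeholders):

```sh
# Open a shell inside the running container as the uid/gid under test.
docker exec -it -u 1000:1000 backblaze_personal_backup bash

# Inside the container: check that the mounted data is readable and writable.
ls -la /drive_d/                                   # placeholder data mount point
touch /drive_d/.perm_test && rm /drive_d/.perm_test
```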

It could also be the case that the permissions are generally fine, but just a few files or folders inside the data directory have differing permissions (for Unraid, the "Fix common issues" plugin might be helpful to overwrite all permissions and file owners correctly for all files, but this could break other Docker configurations that rely on these wrong permissions, because their uid/gid is also wrong).

A last resort for Unraid could be enabling the "Privileged" toggle (I don't know if there is an equivalent in the Synology UI). This gives the container full access to EVERYTHING on the host (including all devices and system files), so it could theoretically wreak havoc, and I have not tested it myself.
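On a plain Docker host that toggle corresponds to the --privileged flag; a minimal sketch (names and paths are placeholders, not from this thread):

```sh
# Last resort only: --privileged gives the container full access to the host.
docker run -d \
  --privileged \
  -e USER_ID=0 -e GROUP_ID=0 \
  -v /path/to/config:/config \
  -v /path/to/data:/drive_d \
  tessypowder/backblaze-personal-wine
```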

Generally I recommend looking at other issues in this repo, as many people have already run into and solved this issue.

nj2359 commented 1 year ago

I am having the same issue: "ERR_NotificationDialog_bad_bzdata_permissions_Msg" every time the Docker container is started. If I click Close, backups will upload until the error reoccurs, roughly within a 2 hour timeframe. Sometime after the error reappears, backups are halted until the error message is closed again. I'm using the environment variables USER_ID=0 and GROUP_ID=0, and have also tried UMASK=000 in conjunction. I've created new Docker containers without mounting any other volumes and still get that error every time. I'm running the container on OpenMediaVault 6.1.1-1, which runs on Debian 11. I've also tried on OMV 5 (Debian 10). It would be great if someone could help resolve this. If this can be figured out, this is the most cost effective backup solution; however, it is not viable with this issue. I'm happy to help troubleshoot if someone with the expertise can direct me.

nj2359 commented 1 year ago

For more info, here is the docker compose file:

```yaml
---
services:
  backblaze:
    image: tessypowder/backblaze-personal-wine:latest
    container_name: backblaze_personal_backup2
    init: true
    environment:
```

Rowdy commented 1 year ago

I tried running the container with the mapped root user, which results in no different experience than with my "normal" mapped Docker user. One thing that happened to me two days ago is that the icon in the upper left corner switched to a red stop circle instead of a green check mark?! I don't know if that implies something. Backup runs "normally" as long as the container is fully working.

Right now my only solution is to restart the container every 3 hours. Unfortunately, the task scheduler on the Synology only seems to work some of the time. Maybe it's the container. I dunno.

nj2359 commented 1 year ago

In settings, I've set the temporary data drive to another mounted volume rather than the default of C:. I'm not sure if this is why, but I'm getting close to 24 hours before I have to restart the container. The other variable is that Backblaze is now working on larger files in the 300 - 500 MB range. I don't usually see the bzdata_permissions pop-up anymore; Backblaze will just be hung on a specific file and not uploading anymore. I'm monitoring bandwidth through my firewall, so I know when it is doing nothing. If you look at the latest bz_done file, for example /config/wine/drive_c/ProgramData/Backblaze/bzdata/bzbackup/bzdatacenter/bz_done_20230115_0.dat, the line at the very bottom will be the last file it was working on. It is always the file that shows in the Backblaze GUI when it is hung. This way I can make sure that file actually gets uploaded sometime in the future after the container is restarted. It appears the timestamps at the beginning of each line in the bz_done files use GMT time, so you know the last time a file was being processed.
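For a quick check from a shell, that amounts to something like this (the date suffix in the file name will differ on other setups):

```sh
# Print the last entry in the current bz_done log (path from above).
# The timestamp at the start of the line is in GMT.
tail -n 1 /config/wine/drive_c/ProgramData/Backblaze/bzdata/bzbackup/bzdatacenter/bz_done_20230115_0.dat
```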

Does anyone know where the bzdata_permissions error is logged so maybe I can get additional information?

nj2359 commented 1 year ago

To add to my earlier post: when I no longer see bandwidth being consumed for the upload, I'll check the properties of the .bzvol directory, which is at the root of the drive set as the "temporary data drive" in settings. If the number of files and total size of that folder is not changing, neither growing nor shrinking, I know the software is doing nothing and the container needs to be restarted. However, I have caught it where no bandwidth was being consumed while it was processing a large file: the number of files in .bzvol was growing, and the next file listed in bz_todo (a file that was hundreds of gigs, i.e. the file right after the last one listed in bz_done) was being processed into chunks. Using the log files bz_done and bz_todo is how I know what file is being processed, as the GUI may not always show the current file.

bz_done location: /config/wine/drive_c/ProgramData/Backblaze/bzdata/bzbackup/bzdatacenter/bz_done_20230115_0.dat. I'll look at the last line for the file path that was processed. Whatever the path on the last line of bz_done is, I'll search for that path in the bz_todo file. bz_todo location: /config/wine/drive_c/ProgramData/Backblaze/bzdata/bzbackup/bzdatacenter/bz_todo_20230128_0.dat. The next file to be processed is the line that follows where that path occurs in bz_todo. When you restart the container, that should be the next file to process.
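In shell form, that cross-reference is roughly the following (the quoted path is a stand-in for whatever the last bz_done line contains, and the date suffixes will differ):

```sh
BZDC=/config/wine/drive_c/ProgramData/Backblaze/bzdata/bzbackup/bzdatacenter

# Last file Backblaze was working on:
tail -n 1 "$BZDC/bz_done_20230115_0.dat"

# Locate that path in bz_todo; the line printed after it is the next file to process.
grep -F -A 1 '/path/from/the/last/bz_done/line' "$BZDC/bz_todo_20230128_0.dat"
```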

Note, the size of .bzvol when processing a large file is not proportional to the size of the file being processed; it is way, way smaller. For instance, when processing a 368 GB file, there were roughly 25,000 files in .bzvol totaling around 11 MB. The 25,000 files were a bunch of chunk files of only around 387 bytes each. You can see the number of files decreasing as data is being uploaded while bandwidth is being consumed, and increasing when a large file is being prepared for upload while bandwidth might not be used. If no bandwidth is being used, just make sure there is activity in the .bzvol folder; if not, restart the container. As mentioned, I'm getting about 24 hours before I need to restart.
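A quick way to watch that from a shell (the .bzvol path is a placeholder for whichever volume is set as the temporary data drive):

```sh
# Count files and total size under .bzvol; if neither changes over time,
# the client is stuck and the container needs a restart.
find /path/to/temp-drive/.bzvol -type f | wc -l
du -sh /path/to/temp-drive/.bzvol

# Or re-check automatically every 60 seconds:
watch -n 60 'find /path/to/temp-drive/.bzvol -type f | wc -l; du -sh /path/to/temp-drive/.bzvol'
```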