justaboveaverage opened this issue 2 years ago
Are you using the default template settings?
Yes, absolutely. Just fully removed it and reinstalled it. The only setting I checked/changed was the location of the "storage", which on Unraid is /mnt/user
Could you provide the output of the following command:
`docker exec -ti CloudBerryBackup ls -la /`
Sure:
```
total 8
drwxr-xr-x    1 root root 106 Mar  6 09:40 .
drwxr-xr-x    1 root root 106 Mar  6 09:40 ..
-rwxr-xr-x    1 root root   0 Mar  6 09:40 .dockerenv
drwxr-xr-x    1 root root  17 Feb  4 12:30 bin
drwxrwxrwx    1 root root  83 Mar  6 00:03 config
drwxr-xr-x    1 root root  27 Feb  4 12:30 defaults
drwxr-xr-x    5 root root 340 Mar  6 09:40 dev
drwxr-xr-x    1 root root 115 Mar  6 09:40 etc
drwxr-xr-x    2 root root   6 Nov 12 23:14 home
-rwxr-xr-x    1 root root 389 Mar  2  2018 init
drwxr-xr-x    1 root root  17 Dec 30 11:49 lib
drwxr-xr-x    2 root root  82 Mar  2  2018 libexec
drwxr-xr-x    5 root root  44 Nov 12 23:14 media
drwxr-xr-x    2 root root   6 Nov 12 23:14 mnt
drwxr-xr-x    1 root root  32 Feb  4 12:30 opt
dr-xr-xr-x  775 root root   0 Mar  6 09:40 proc
drwx------    2 root root   6 Nov 12 23:14 root
drwxr-xr-x    1 root root  51 Mar  6 09:40 run
drwxr-xr-x    1 root root 130 Dec 30 04:59 sbin
drwxr-xr-x    2 root root   6 Nov 12 23:14 srv
-rwxr-xr-x    1 root root 248 Feb  4 12:26 startapp.sh
drwxrwxrwx    1 99   users 117 Mar  6 10:12 storage
dr-xr-xr-x   13 root root   0 Mar  6 09:40 sys
drwxrwxrwt    1 root root 132 Mar  6 09:40 tmp
drwxr-xr-x    1 root root  19 Feb  4 12:26 usr
drwxr-xr-x    1 root root  41 Dec 30 11:50 var
```
And this one?
`docker exec -ti CloudBerryBackup mount`
Nothing returned with that one...
Humm it should...
You can also try to log in to the container with `docker exec -ti CloudBerryBackup sh` and run the command from there.
I've just reinstalled a completely new container into a completely fresh location (removed old CloudBerryBackup folder), just to be sure.
Here's what I get:
```
# docker exec -ti CloudBerryBackup mount
# docker exec -ti CloudBerryBackup sh
/tmp # mount
/tmp #
```
Does it matter what user I use to run the container?
You are talking about the `User ID` and `Group ID` settings of the template? The default values should definitely work.
Which unRAID version are you using?
Can you try this command:
`docker exec -ti CloudBerryBackup df -h`
Yeah, that User ID.
Using Unraid version: 6.10.0-rc2
Ok here is the output:
```
# docker exec -ti CloudBerryBackup df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  40.0G     21.0G     19.0G  52% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                    31.4G         0     31.4G   0% /sys/fs/cgroup
shm                      64.0M         0     64.0M   0% /dev/shm
shfs                    931.1G    248.5G    682.6G  27% /config
shfs                     46.4T     30.4T     16.0T  65% /storage
/dev/loop2               40.0G     21.0G     19.0G  52% /etc/resolv.conf
/dev/loop2               40.0G     21.0G     19.0G  52% /etc/hostname
/dev/loop2               40.0G     21.0G     19.0G  52% /etc/hosts
tmpfs                    31.4G         0     31.4G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    31.4G         0     31.4G   0% /sys/firmware
```
The new unRAID version might cause CloudBerry Backup to not detect the root filesystem.
On my setup running 6.9, I have:
`/dev/loop2 64.0G 15.8G 46.0G 26% /`
This is different from yours:
`overlay 40.0G 21.0G 19.0G 52% /`
I see that there is a new CloudBerry Backup version available. I will integrate it and provide a new image. But if the issue persists with the new image, I would recommend that you contact the CloudBerry support team to report the problem.
Thanks so much, I will try the new CloudBerry image as soon as it's available.
Is there anything special I need to tell the CloudBerry folks? Do they know this image / container exists? :)
They know (see https://github.com/jlesage/docker-cloudberry-backup/issues/3), but I'm not sure how much they really support this scenario. So I would not focus on the fact that CBB is running inside a container ;)
New image is now available.
Just pulled down the new image and re-installed.
I got the message that CloudBerry Backup has been upgraded, but unfortunately the GUI still looks the same: no local folders, and no storage folder...
Did you try to contact their support team?
Only recently; I haven't heard back from them yet.
They just replied that they cannot help me since docker is not supported and this github project is not maintained by them.
That's a disappointing answer. I don't think the issue is with the container, but with their software not handling the filesystem correctly.
I think we should be able to use a VM that uses a similar filesystem and report the issue from there...
Latest message from MSP360 support:
> Hello,
> We checked the case with another team, and we can confirm that this project is not officially supported.
> However, the more requests we receive from our clients, the sooner we will start considering supporting it.
> We have added your email to the list of customers waiting for this feature to be released and forwarded your feedback to our developers.
> Please let us know if you have any remaining issues or questions, so we could assist with them.
> Our kind regards,
@justaboveaverage Unfortunately I have a very similar problem. I just found this container (thanks so much, @jlesage!) but I can't mount my file tree inside the storage volume.
And here comes the weird thing:
My `docker-compose` looks like this:

```yaml
services:
  cloudberry-backup:
    image: jlesage/cloudberry-backup
    container_name: cloudberry-backup
    environment:
      - PUID=1000
      - PGID=100
      - TZ=Europe/Berlin
      - CBB_WEB_INTERFACE_USER=test
      - VNC_PASSWORD=test
    volumes:
      - "/srv/[...]/Apps/CloudBerryBackup:/config:rw"
      - "/srv/[...]/Daten:/storage"
      - "/srv/[...]/Daten/SourceDemo:/test/DemoTest" # this is for demonstration purposes
    ports:
      - 5800:5800
      - 43210:43210
      - 43211:43211
    restart: unless-stopped
```
When I look at the files inside the container, everything looks as expected:
But in CBB I can only see my mapped subfolder `/test/DemoTest`, and it appears at root level, even though it is in the subfolder `test`. I don't understand this behaviour.
I think this is really a bug in CBB itself, since inside the container everything is good AND I can also read/write the strangely working subfolders.
CBB is not displaying the standard file tree. It seems to show mount points instead... What do you get if you run `docker exec <container name> mount`?
It would be good if you could log a support case with the CloudBerry team; as I mentioned previously, they will look at the issue if there is enough demand from users.
> CBB is not displaying the standard file tree. It seems to show mount points instead... What do you have if you run `docker exec <container name> mount`?
I don't understand the workflow with file trees that CBB expects from its users. What do you mean by "mount point"? To my understanding, the docker volume path mapping is a mount point too. In the end, I expect CBB to handle this mount as a regular folder, as part of the container's file system. This is not working. What piece am I missing here?
The `mount` command does not provide any output, even if I run it directly in the container.
> It would be good if you can log a support case with the CloudBerry team - per my message previously, they will look at the issue if there is enough demand from users.
I will open a case. Hopefully it helps.
Thank you guys!
> I don't understand the workflow with file trees that CBB expects from its users. What do you mean by "mount point"? To my understanding, the docker volume path mapping is a mount point too. In the end, I expect CBB to handle this mount as a regular folder, as part of the container's file system. This is not working. What piece am I missing here?
I agree with you, it should work. I don't know why some mount points are not detected by CBB. Maybe it depends on the associated file system...
> The `mount` command does not provide any output, even if I run it directly in the container.
You can try to run `df -h` from the container instead.
> I will open a case. Hopefully it helps.

To maximize your chance of getting good answers, just don't tell them that CBB is running inside a container ;)
> You can try to run `df -h` from the container instead.

@jlesage Looks good to me, or is anything wrong? The blurred parts are my volume shares, specified in the `docker-compose.yml`.
BUT I MADE SOME PROGRESS! 🤯
I've played around a lot with different settings and permissions because this problem led to sleepless nights for me. Turns out that the files need to have the exact same owner as specified by the `PUID` and `PGID` values. There is also a hint in the `README.md`...

> When using data volumes (-v flags), permissions issues can occur between the host and the container. For example, the user within the container may not exist on the host. This could prevent the host from properly accessing files and folders on the shared volume.

@justaboveaverage can you confirm that the `PUID` and `PGID` match for the files and docker-compose in your case? (Run `id <username>` to get the values.)
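For what it's worth, the ownership check described above can be sketched in a few shell commands. This is only an illustration: `DATA` is a placeholder for your own data directory, and the current user stands in for the user whose UID/GID you pass to the container.

```shell
# A minimal sketch of the ownership check described above.
# DATA is a placeholder; point it at the directory you want CBB to back up.
DATA=${DATA:-.}

# The UID/GID you would pass to the container as PUID/PGID
# (run `id <username>` for a different user):
echo "PUID=$(id -u) PGID=$(id -g)"

# Numeric owner (uid:gid) of the entries under the data directory.
# These must match PUID/PGID, otherwise CBB cannot access the files:
stat -c '%u:%g %n' "$DATA"/* 2>/dev/null | head
```

If the `stat` output shows a different uid:gid than the values you configured, either `chown` the data or adjust `PUID`/`PGID` in the compose file.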
For context: I'm running an Open Media Vault instance as the host system and tried some other backup solutions before. Some of them changed the owner of the directories I want to back up. I wasn't aware of that.
Anyway, it does not work when mounting the directories under the storage folder:

- `"/srv/dev-disk-by-uuid-[...]/my-data:/storage:ro"` --> `/storage` in the CBB file tree will be empty
- `"/srv/dev-disk-by-uuid-[...]/my-data:/my-data:ro"` --> files show up under `/my-data` in the CBB file tree, but not in `/storage`
I'm feeling like a total noob now. Thanks for your help.
I'm on Unraid... so where do I run that command from? Within the container, or from the Unraid command line?
You should not have any problem with unRAID. Are you using the default settings?
So it has been 6 months and I could never get this going. My storage folder in CloudBerry Backup on Unraid was never showing.
Today I tested moving my docker image type from XFS to Directory: same thing, no difference with CloudBerry. I then tried the BTRFS image type and BAM, CloudBerry is working again, and the storage folder is visible.
Now... I remember that I moved from BTRFS to the XFS image type because I was having some issues (cannot recall what), so I am reluctant to go back. My question is: why is the CloudBerry image only working on the BTRFS docker image type, and not on XFS or Directory (I would like to move to Directory for ease of management, etc.)?
THANK YOU!!!
I had the same issue with `/` not showing when selecting a source in CloudBerry. I run 2 instances of CB on separate unRAID servers. It worked fine on one but not the other. Both unRAID servers are running Docker in directory mode; they were both previously using a docker image. The one with the issue was running CB prior to making the switch. I've done a clean install but the issue remained. The one that never had the issue worked with just a single path mapping, but also worked when I set up a second path.

After setting these 2 mount points on the instance with the issue, I was able to see `/storage` when selecting a source in CB. They both need to be under `/storage`, otherwise it doesn't work. The instance without the issue doesn't have this restriction.

- host `/mnt` --> container `/storage/local`
- host `/mnt/remotes` --> container `/storage/remotes`
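As a sketch only, the workaround above could look like the following `docker run` invocation. The container name, the `/config` appdata path, and the image tag are assumptions based on earlier messages in this thread; adjust them to your setup.

```shell
# Sketch of TadMSTR's workaround: both host paths are mapped UNDER /storage.
# Names and paths are examples, not a definitive configuration.
docker run -d \
  --name CloudBerryBackup \
  -p 5800:5800 \
  -v /mnt/user/appdata/CloudBerryBackup:/config:rw \
  -v /mnt:/storage/local \
  -v /mnt/remotes:/storage/remotes \
  jlesage/cloudberry-backup
```

The key point is that every source path ends up as a subdirectory of `/storage` inside the container, rather than a top-level mount.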
TadMSTR was correct for my case. Fresh install, and I had to adjust the template container path to `/storage/local`.
Worked for me as well, thanks @TadMSTR
Hi
I've had the CloudBerry container running on Unraid for a year or more. Recently I realised that this container has not been backing up anything for a while because the STORAGE folder has disappeared from the GUI. Pretty sure it was there originally because I had selected some specific folders under STORAGE (in Unraid this is under /mnt/user). Now, even if I remove the container completely, delete the config folders and start fully from scratch, when I go into the Select Source, all I see is the below:
Any suggestions on how to resolve?