boredazfcuk / docker-icloudpd

An Alpine Linux container for the iCloud Photos Downloader command line utility

WARNING: failsafe file not found #529

Closed WSTDESIGN closed 1 month ago

WSTDESIGN commented 3 months ago

I never get to the download stage because it says the failsafe file does not exist. The problem is that the .mounted file is there; it just isn't being seen.

Can anyone help me get this to work? Would be forever grateful.

Thanks

frebens commented 3 months ago

I've been at this solidly for the last 3 days. I'm also unable to get the .mounted file picked up. I've tried various combinations of folders and permissions, but no luck. Does anyone have a screenshot of the variables they have set related to file paths?

I've set download_path=/volume1/homes/freben/iCloud as per the recommendation and moved the .mounted file there, but the script still gets stuck on WARNING: Failsafe...

lonevvolf commented 2 months ago

I am using this on Synology and haven't had this issue. I would recommend checking the user_id, group, and group_id settings. They should match those of the Synology host user that you want to run under. I haven't set the download_path setting at all in my config.

frebens commented 2 months ago

Thanks for the response. I'm about to throw in the towel as I just can't get it right.

Here's my config:

[image: Screenshot 2024-04-09 at 14.35.33.png]

Should I be adding the .mounted file in the volume settings?

[image: Screenshot 2024-04-09 at 14.34.41.png]


frebens commented 2 months ago

[image: Screenshot 2024-04-09 at 14 34 41]

[image: Screenshot 2024-04-09 at 14 35 33]

lonevvolf commented 2 months ago

A few notes:

1. You can remove synology_photos_app_fix; it doesn't work.
2. Is the user_id the same as your Synology user's?
3. group should be a name like "users"; group_id should be the numeric ID.
4. Try not using a home folder for the download_path - I would suggest using a base shared folder.
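
For example, you can find the right values by SSHing to the NAS and running id against your user (the username below is illustrative):

```sh
# Print the uid, gid, and group memberships of the host user.
# "freben" is an example username - substitute your own.
id freben

# Example output (the numbers will differ per system):
# uid=1026(freben) gid=100(users) groups=100(users)
```

Those uid and gid values are what user_id and group_id should be set to, and the group name goes in group.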

frebens commented 2 months ago

Thanks, I have done as suggested. I've even gone so far as to remove everything and start from scratch. Here is the log it produced after I completed the 2FA.

[image: Screenshot 2024-04-10 at 12 55 37]

[image: Screenshot 2024-04-10 at 12 56 01]

Folder permissions set to allow user freben

[image: Screenshot 2024-04-10 at 12 57 24]

Folder with .mounted file as per the config

[image: Screenshot 2024-04-10 at 12 59 05]

Config of the container. Note that I cannot add anything related to group or group_id, as the process identifies that it is already there and exits:

[image: Screenshot 2024-04-10 at 13 09 28]

Thanks for your help!

petercockroach commented 2 months ago

Following this as well. I was able to work around this issue by running `sudo docker exec -it icloudpd touch /home/user1/iCloud/.mounted`, which worked successfully. I believe this proves that the container was able to create this file with its given permissions.

Once my photos started downloading, I could see all the file names in the log, but nothing was actually being written to the disk!

Edit: I threw the kitchen sink at it and still no luck. FYI, I'm also on Synology. I've tried setting the user to my local user/ID, Docker's user/ID, toggling force_gid on and off... no idea where to go next with this.

Edit 2: I went into Container Manager and opened a new terminal with ash. I was able to create a password for my user using `passwd user`, then `login user`. From there, I could cd to /home/user1/iCloud and create and destroy files with no permission problems. I'm really beginning to suspect there's a real bug here. Happy to provide more info if needed.
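
For anyone else debugging this, a quick write test from inside the container looks like this (container name and paths are from my setup; adjust to yours):

```sh
# Open a shell inside the running container
sudo docker exec -it icloudpd ash

# Inside the container: check who owns the download path and what its mode is
ls -ld /home/user1/iCloud

# Prove write access by creating and removing a throwaway file
touch /home/user1/iCloud/write-test && rm /home/user1/iCloud/write-test
```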

frebens commented 2 months ago

This is exactly what I thought the issue was. I used the command you suggested and it seemed to create the file, though I can't see it - I assume it is hidden. But I no longer get the waiting-for-failsafe-file warning - whooohooo!!!

It logged into the iCloud account and found the photos. Now I'm waiting to see if it actually downloads them. I specifically used an iCloud account with around 50 images to make sure it works before I download the 2TB library. So I'm waiting for the 50 images to come through first. Once they're there, I'll amend the config and start the big one...

Thanks for your help so far! Much appreciated.


boredazfcuk commented 2 months ago

If you create the failsafe file using `sudo docker exec -it icloudpd touch /home/user1/iCloud/.mounted` or something similar, then you are creating that file inside the container. This defeats the purpose of the failsafe mechanism.

It's likely the photos will be downloaded inside the container, rather than on a volume, so all photos will be deleted whenever the container is re-created/upgraded.
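
The correct approach is to create the marker on the host, inside the directory you mapped into the container. As a sketch (the host path here is just an example - use whatever directory you actually mapped to the download location):

```sh
# On the HOST (not inside the container): create the failsafe marker
# in the directory that is bind-mounted as the container's download path
touch /volume1/docker/icloudpd/photos/.mounted

# Confirm it is there (ls -a lists dotfiles)
ls -a /volume1/docker/icloudpd/photos
```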

petercockroach commented 2 months ago

@boredazfcuk yeah, I totally agree and understand this, but I did this merely as a debugging exercise to see if it made any progress. The peculiar part is that no assets are being written to the disk even within the container, so there's some sort of write error going on. Whether it's a permissions problem or something else is what I'm not clear on.

Edit: I'm diving into the code now. I see the check for .mounted here, but I don't see where the .mounted file is being created at ${download_path}.

Edit 2: I tried to create a completely new instance locally using Docker on my Mac and - would you know it - I ran into the exact same issue!

@frebens when you create a file with a . preceding it, it is a hidden file. You should still be able to see it with `ls -a` or by turning on the option to view hidden files in your GUI.

boredazfcuk commented 2 months ago

The script doesn't create the .mounted file. It has to be a manual process performed outside of the container.

It's basically a marker so you can say "this is the volume I want my photos to appear in".

When the container launches, if it can't see the .mounted file, it knows that the volume isn't attached to the container correctly. This way it avoids downloading all the photos inside the container and losing them when the container is upgraded.

Also, the Docker container files are often contained in the root partition. If you don't map a volume correctly, then download 2TB to the container, chances are that you will fill the root partition of your server.

The only other way I could test for this would be to mount the Docker socket file inside the container and start querying that. That seems a bit overly complicated, and it also gives the script permission to interact with the Docker system.

I feel that's a major security concern for users as it would mean I could start querying all sorts of stuff from within the container, with a few modifications to the script.

Creating the mounted marker file is a simpler and more secure way of telling the container it's looking at the correct place.
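
The check itself boils down to something like this (an illustrative sketch, not the literal code from sync-icloud.sh):

```sh
# Illustration of the failsafe idea: wait until the marker file
# is visible at the configured download path
download_path="${download_path:-/home/user/iCloud}"

while [ ! -f "${download_path}/.mounted" ]; do
   echo "WARNING: failsafe file ${download_path}/.mounted not found - waiting"
   sleep 300
done
```

If the volume is mapped correctly, the marker you created on the host appears at that path inside the container and the wait ends immediately.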

petercockroach commented 2 months ago

Thanks for the reply. Pardon my ignorance, but could you elaborate a bit on when the .mounted file is created? I now understand that the script doesn't create it, but at what point in the process is it created, and by what function?

Edit: okkkkk... so maybe I've figured this whole thing out. It sounds like the .mounted file needs to be created... manually, by the user? If that's the case, I would suggest it be added to the README or, better yet, included as part of the initialization process - even if it's just a warning to the user to create it manually, along with the command to do so.

> Once my photos started downloading, I could see all the file names in the log, but nothing was actually being written to the disk!

I realized that this was just the part of the process where the file names were being logged to the console but not actually downloaded yet. After leaving it for some time, the download has now started.

northportio commented 2 months ago

What's the status on this?

frebens commented 2 months ago

> What's the status on this?

I was unfortunately unable to make this work on my Synology NAS. I am hoping someone much more clever than me can solve it.

WSTDESIGN commented 2 months ago

> I am using this on Synology and haven't had this issue. I would recommend checking the user_id, group, and group_id settings. They should match those of the Synology host user that you want to run under. I haven't set the download_path setting at all in my config.

Can you please elaborate on this? Maybe provide screenshots or something so I'm clear on this. I'm fairly certain all the permissions are correct.

WSTDESIGN commented 2 months ago

> The script doesn't create the .mounted file. It has to be a manual process performed outside of the container.
>
> It's basically a marker so you can say "this is the volume I want my photos to appear in".
>
> When the container launches, if it can't see the .mounted file, it knows that the volume isn't attached to the container correctly. This way it avoids downloading all the photos inside the container and losing them when the container is upgraded.
>
> Also, the Docker container files are often contained in the root partition. If you don't map a volume correctly, then download 2TB to the container, chances are that you will fill the root partition of your server.
>
> The only other way I could test for this would be to mount the Docker socket file inside the container and start querying that. That seems a bit overly complicated, and it also gives the script permission to interact with the Docker system.
>
> I feel that's a major security concern for users as it would mean I could start querying all sorts of stuff from within the container, with a few modifications to the script.
>
> Creating the mounted marker file is a simpler and more secure way of telling the container it's looking at the correct place.

Can you please elaborate on mapping the volume? I'm not clear on this. Every time I try to add a folder, the second field is looking for... another folder? Another file? No matter what I put in the second field, the container settings will not save and I get a form error. I'm also finding that when I go to map the volume, I don't get the option of selecting any shared folders... I don't have a clue what is going on here.. very frustrating.

boredazfcuk commented 2 months ago

> I would suggest it be added to the README or, better yet, included as part of the initialization process - even if it's just a warning to the user to create it manually, along with the command to do so.

It's already part of the configuration guide. It will also tell you about it if you run `sync-icloud.sh --help`.

boredazfcuk commented 2 months ago

> Can you please elaborate on mapping the volume? I'm not clear on this.

When working with containers, if you delete the container or upgrade it, you lose everything inside it. For this reason, you need to map external volumes to directories inside the container, which act as persistent storage. This container needs two of these persistent volumes: one to store the /config data and another for your download location, which defaults to /home/user/iCloud if the download_path variable is not configured.
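
With plain Docker, that mapping looks something like this (host paths are examples; pick directories on your own storage):

```sh
# Two bind mounts: one for the /config data, one for the download location
# (which defaults to /home/user/iCloud when download_path is not set)
docker run -d \
  --name icloudpd \
  -v /volume1/docker/icloudpd/config:/config \
  -v /volume1/photos/icloud:/home/user/iCloud \
  boredazfcuk/icloudpd
```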

> Every time I try to add a folder, the second field is looking for... another folder? Another file? No matter what I put in the second field, the container settings will not save and I get a form error. I'm also finding that when I go to map the volume, I don't get the option of selecting any shared folders... I don't have a clue what is going on here.. very frustrating.

I'm not sure what any of this means. It sounds like a problem with the container management system on the device you're using, to be honest. That's not something I'm familiar with, as I don't use any UnRAID, TrueNAS or Synology type NAS devices.

WSTDESIGN commented 1 month ago

Okay.. I'm on Synology. I've attached a screenshot of the fields in Container Manager that frebens had. He added /config to the second field. So I would map the folder docker/icloudpd, then /config for the config data, and leave the download path at the default. I understand now.

[image: IMG_4240]

My other issue is still creating the .mounted file. I read that you need to make the file outside of the container, as a user with full permissions. I'm not exactly sure how to do that. I've tried two different ways and neither of them has worked.

I tried SSHing in as the user, went to the folder, and did touch .mounted. It creates the file just fine, but I'm still getting the error.

I also tried using a text editor to create a text doc called .mounted.txt, removed the .txt, and placed it in the folder. That didn't work either.

I'm just not clear on how to create the file outside of the container. Could you please help me understand that part? Not sure what I'm missing...

thanks

boredazfcuk commented 1 month ago

I'm not familiar with the Synology Container Manager, but try creating two volumes and attaching them to the container: one called "config", attached to "/config", and a second called "photos", mapped to "/photos". Then set the "download_path" variable to "/photos".

Then, on the NAS, find the location where that volume lives using whatever file manager is on there, and create a file called ".mounted" in it.
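
In plain docker run terms, that suggestion would look something like this (host paths are examples):

```sh
# Map "config" to /config and "photos" to /photos,
# and point download_path at /photos
docker run -d \
  --name icloudpd \
  -e download_path=/photos \
  -v /volume1/docker/icloudpd/config:/config \
  -v /volume1/photos/icloud:/photos \
  boredazfcuk/icloudpd

# Then, on the NAS itself, create the failsafe marker
# in the host-side folder backing the "photos" volume:
touch /volume1/photos/icloud/.mounted
```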

I don't have a Synology so I don't really know what to advise beyond that.

northportio commented 1 month ago

I did that, but permissions still seemed to be an issue. I fixed it by getting into the container's /bin/sh shell and running these commands:

```sh
# Change ownership of the directory to the user
sudo chown -R user /path/to/directory

# Grant the owning user full access (read, write, execute)
sudo chmod -R u+rwx /path/to/directory
```

boredazfcuk commented 1 month ago

The container does pretty much both of those on every launch as part of its initialisation. You just need to set the directory_permissions and file_permissions variables to the permissions you want.

I have mine set to 750 for directories and 640 for files, which is the default. What you've done there is the equivalent of setting both directory and file permissions to 777. That would make sense for directories, as they need to be executable for you to be able to move through them. By setting them to 777, though, you're allowing everyone with access to the system to browse the folders.

Setting file permissions to 777 is a little overkill, though. They're photos, not programs that need to be executed, so they should really be set to 666 at most. That allows read/write access for everyone who has access to the system, but doesn't set an unnecessary executable bit.
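
What the container does at launch is roughly equivalent to this (an illustrative sketch, not the literal init code):

```sh
# Re-apply ownership and the two permission masks to the download area.
# 750 directories / 640 files are the defaults described above.
chown -R "${user}:${group}" "${download_path}"
find "${download_path}" -type d -exec chmod "${directory_permissions:-750}" {} +
find "${download_path}" -type f -exec chmod "${file_permissions:-640}" {} +
```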