The Auto Backup folder seems to be magical because it can contain more than 1,000 pictures. This is clearly displayed when I run the script with -v. (I have around 16,000 images in there.)
The script as it stands can only download the "first" 1,000 of those pictures. I played around in the code and introduced pagination for the retrieval of the picture metadata using the optional limit and start_index parameters of gd_client.GetFeed, but I am still unable to download more than 10,000 photos.
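The pagination approach described above can be sketched roughly as follows. This is a minimal illustration of the limit/start_index pattern, not the script's actual code: get_feed here is a hypothetical stand-in for gd_client.GetFeed (the real call needs an authenticated client and the album's feed URI), and the page size of 1,000 matches the per-request cap observed above.

```python
def fetch_all_entries(get_feed, page_size=1000):
    """Collect feed entries page by page until a short or empty page arrives.

    Note: as described below, Google's feed rejects start_index values
    above 10,000 with a 500 'Deprecated offset' error, so this pattern
    alone still caps out at 10,000 entries.
    """
    entries = []
    start_index = 1  # GData feeds are 1-indexed
    while True:
        # Stand-in for gd_client.GetFeed(uri, limit=..., start_index=...)
        page = get_feed(limit=page_size, start_index=start_index)
        if not page:
            break
        entries.extend(page)
        if len(page) < page_size:
            break  # last, partial page
        start_index += page_size
    return entries
```

With a page size of 1,000 this issues 16 requests for 16,000 photos, but the requests whose start_index exceeds 10,000 fail with the error quoted below.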
If I set start_index above 10,000, I get the following error from Google: (500, 'Internal Server Error', 'Deprecated offset is too large for a stream ID query. Please switch to using resume tokens.')
Any ideas how this limitation can be overcome, or what these resume tokens might be?
Sorry for the long delay. I'm afraid I don't know, and I'm going to close this: although it is an interesting question, it isn't really an issue with the code itself.