kiwiz / gkeepapi

An unofficial client for the Google Keep API.
MIT License

RESOURCE_EXHAUSTED exception from Google; can I just sleep between requests? #126

Closed · jp5282 closed this 2 years ago

jp5282 commented 2 years ago

I'm trying to bootstrap gkeepapi for the first time against my very old Keep account, which has 14k+ entries (verified via Google Takeout). I wrote a simple script that logs in and then syncs. The first run, as I recall, was successful; every run since errors out with RESOURCE_EXHAUSTED. I'm fine with long running times. Can I just add the equivalent of sleep(1) between each request? I looked at the code myself but couldn't find the equivalent of a for loop over requests. If so, what line of my local gkeepapi should I modify?

Trace:

File "/usr/local/lib/python3.9/site-packages/gkeepapi/__init__.py", line 241, in send
    raise exception.APIException(error["code"], error)
gkeepapi.exception.APIException: {
    'code': 429,
    'message': "Quota exceeded for quota metric 'Sync requests' and limit 'Sync requests per minute per user' of service 'notes-pa.googleapis.com' for consumer 'project_number:192748556389'.",
    'errors': [{
        'message': "Quota exceeded for quota metric 'Sync requests' and limit 'Sync requests per minute per user' of service 'notes-pa.googleapis.com' for consumer 'project_number:192748556389'.",
        'domain': 'global',
        'reason': 'rateLimitExceeded'
    }],
    'status': 'RESOURCE_EXHAUSTED',
    'details': [{
        '@type': 'type.googleapis.com/google.rpc.ErrorInfo',
        'reason': 'RATE_LIMIT_EXCEEDED',
        'domain': 'googleapis.com',
        'metadata': {
            'service': 'notes-pa.googleapis.com',
            'quota_metric': 'notes-pa.googleapis.com/sync_requests',
            'quota_limit_value': '150',
            'quota_location': 'global',
            'consumer': 'projects/192748556389',
            'quota_limit': 'SyncsPerMinutePerProjectPerUser'
        }
    }, {
        '@type': 'type.googleapis.com/google.rpc.Help',
        'links': [{
            'description': 'Request a higher quota limit.',
            'url': 'https://cloud.google.com/docs/quota#requesting_higher_quota'
        }]
    }]
}

kiwiz commented 2 years ago

The recommendation is to cache notes locally so only deltas need to be synced: https://github.com/kiwiz/gkeepapi/blob/master/examples/resume.py
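
For reference, a minimal sketch of that pattern (modeled on resume.py; the credentials and the state.json path are placeholders):

```python
import json
import gkeepapi

keep = gkeepapi.Keep()

# Load cached state from a previous run, if any.
try:
    with open('state.json') as fh:
        state = json.load(fh)
except FileNotFoundError:
    state = None

# Logging in with a cached state means sync() only fetches deltas.
keep.login('user@gmail.com', 'password', state=state)
keep.sync()

# Persist the state for the next run.
with open('state.json', 'w') as fh:
    json.dump(keep.dump(), fh)
```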

But, to answer the question: https://github.com/kiwiz/gkeepapi/blob/master/gkeepapi/__init__.py#L1049

jp5282 commented 2 years ago

> The recommendation is to cache notes locally so only deltas need to be synced: https://github.com/kiwiz/gkeepapi/blob/master/examples/resume.py

Wow, thanks for the fast reply @kiwiz! Can you say more about this suggestion? I believe you're suggesting I use gkeepapi's dump and restore feature, right? If so, I'm implementing that code now and have tested it successfully on an account with a smaller set of Keep notes. But with the dump/restore pattern, don't I still need one successful run to produce the first dump that subsequent restores load from? I believe that's why I need sleep(1) to avoid RESOURCE_EXHAUSTED even on the first run. Tell me if I'm missing something in your suggestion.

jp5282 commented 2 years ago

Ugh wrong button :P

kiwiz commented 2 years ago

Yup, that's correct. You can put a sleep(1) call within that while loop and it should hopefully throttle the requests enough to succeed.
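
If you'd rather not edit the installed package, here's a sketch of the same throttle applied from the outside. It assumes the internal KeepAPI.changes method is the call that while loop makes once per batch, so treat it as unsupported and pin your gkeepapi version:

```python
import time
import gkeepapi

# Wrap the low-level changes() call, which is what counts against the
# 'Sync requests per minute per user' quota, with a 1-second delay.
_orig_changes = gkeepapi.KeepAPI.changes

def _throttled_changes(self, *args, **kwargs):
    time.sleep(1)
    return _orig_changes(self, *args, **kwargs)

gkeepapi.KeepAPI.changes = _throttled_changes

keep = gkeepapi.Keep()
keep.login('user@gmail.com', 'password')
keep.sync()  # now throttled to at most ~60 sync requests per minute
```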

jp5282 commented 2 years ago

Oh my gosh, that worked! Thank you @kiwiz!!!

So for posterity (for the next person to hit this problem), here's how we solved it, per @kiwiz's suggestion:

1. `pip3 uninstall gkeepapi`
2. Clone gkeepapi from git into a local src directory.
3. Add a `time.sleep(1)` inside the while loop, as @kiwiz suggested.
4. `pip install -e ./` to install the modified version.
5. Run the test script.

It worked! Thanks @kiwiz!!!

jp5282 commented 2 years ago

If anyone needs convincing to use the token/resume and dump/restore patterns with gkeepapi, I measured execution time in my script. The first run, which pulled everything from the server and dumped a local copy to disk, took 241.97 seconds. Subsequent runs, which load the dump from disk and only handle the deltas between server and client, take 11.27 seconds. That's better than a 20x improvement. Thanks again @kiwiz!!!
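
For anyone wiring this up, the token half of that pattern looks roughly like this (getMasterToken() and resume() are the gkeepapi calls; where you store the token is up to you):

```python
import gkeepapi

# First run: authenticate with credentials and save the master token.
keep = gkeepapi.Keep()
keep.login('user@gmail.com', 'password')
master_token = keep.getMasterToken()  # store securely, e.g. in a keyring

# Later runs: resume from the saved token instead of re-authenticating,
# ideally combined with the dump/restore state caching shown above.
keep2 = gkeepapi.Keep()
keep2.resume('user@gmail.com', master_token)
keep2.sync()
```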