Open mambocab opened 1 year ago
I was able to fix this issue locally by cowboying the following into the compiled plugin:
diff --git a/.obsidian/plugins/readwise-mirror/main.js b/.obsidian/plugins/readwise-mirror/main.js
index 55aa926..a368100 100644
--- a/.obsidian/plugins/readwise-mirror/main.js
+++ b/.obsidian/plugins/readwise-mirror/main.js
@@ -8475,7 +8475,12 @@ class ReadwiseApi {
data = yield response.json();
if (response.status === 429) {
// Error handling for rate limit throttling
- const rateLimitedDelayTime = parseInt(response.headers.get('Retry-After')) * 1000 + 1000;
+ var rateLimitedDelayTime = parseInt(response.headers.get('Retry-After')) * 1000 + 1000;
+ if (isNaN(rateLimitedDelayTime)) {
+ console.info(`rateLimitedDelayTime is NaN (derived from ${response.headers.get('Retry-After')})`);
+ rateLimitedDelayTime = 60 * 1000 + 1000; // Default to 61 seconds.
+ }
+
console.warn(`Readwise: API Rate Limited, waiting to retry for ${rateLimitedDelayTime}`);
this.notify.setStatusBarText(`Readwise: API Rate Limited, waiting ${rateLimitedDelayTime}`);
yield new Promise((_) => setTimeout(_, rateLimitedDelayTime));
Feel free to use this patch with or without attribution, and with whatever alterations you like (though I doubt it's the right approach).
When the rate limiting occurs, it seems to happen because response.headers.get('Retry-After') returns null.
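For reference, here is a minimal TypeScript sketch of the same guard, written as a standalone helper rather than a patch against the compiled main.js; the helper name and the 61-second fallback are illustrative choices, not part of the plugin:

```typescript
// Sketch only: compute a retry delay in milliseconds from a raw Retry-After
// header value, falling back to 61 seconds when the header is missing or
// unparseable (which would otherwise yield NaN).
function rateLimitDelayMs(retryAfter: string | null): number {
  const delayMs = parseInt(retryAfter ?? '', 10) * 1000 + 1000;
  if (Number.isNaN(delayMs)) {
    console.info(`rateLimitDelayMs: Retry-After was ${retryAfter}, using default`);
    return 60 * 1000 + 1000; // default to 61 seconds, mirroring the patch above
  }
  return delayMs;
}
```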
Hi @mambocab, thank you for looking into this issue. I have noticed intermittent Readwise throttling issues as well; I'm not sure when they started happening. Maybe there was an API change somewhere down the line, since I wrote the original syncing code when first releasing this plugin and it hasn't been updated since.
Would you feel comfortable submitting a PR for this issue since you seem to have found the solution? I'd be happy to attribute the fix to you in the release notes and incorporate it into the next release.
I just force-synced my entire Readwise library again (~12k highlights) and didn't encounter any throttling errors at all. Weird. Yet I have seen it happen other times when doing the same thing 🤷🏼
I suppose the best long-term fix would be to use the actual time from the Retry-After response header (padded with a little extra time for safety), but apparently CORS issues make accessing the Retry-After header tricky. @johannrichard submitted a fix for this in #20, but after implementing it I encountered issues and needed to revert the changes; that solution is definitely on the right track, though.
Either way, your proposed solution is a good idea; if an API issue causes a NaN, it shouldn't break the throttling implementation.
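As a side note on reading the header itself: per the HTTP spec, Retry-After can be either delta-seconds or an HTTP-date, so a parser that handles both forms plus a fallback covers the cases discussed above. This is only a sketch; the function name, the one-second padding, and the fallback value are assumptions, not plugin code:

```typescript
// Sketch only: Retry-After may be delta-seconds ("45") or an HTTP-date
// ("Wed, 21 Oct 2026 07:28:00 GMT"). Try both, pad by a second, and fall
// back to a fixed delay if neither form parses.
function parseRetryAfterMs(retryAfter: string | null, fallbackMs = 61_000): number {
  if (retryAfter) {
    const seconds = Number(retryAfter);
    if (Number.isFinite(seconds) && seconds >= 0) {
      return seconds * 1000 + 1000; // pad with one extra second for safety
    }
    const dateMs = Date.parse(retryAfter);
    if (!Number.isNaN(dateMs)) {
      return Math.max(dateMs - Date.now(), 0) + 1000;
    }
  }
  return fallbackMs;
}
```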
I'm not sure why this is happening, but rate-limiting isn't working properly for me:
API Rate Limited, waiting to retry for NaN
Looking at the headers of the responses that come back, I see valid Retry-After values (e.g. 45 or 56), which is surprising to me. Having looked at the code, I expected the issue to be a missing header. I think the expected behavior here is a more robust failure mode.
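One way to read "a more robust failure mode" is to bound the whole retry path rather than just the delay calculation. The sketch below is an assumption about how that could look; it reuses the hypothetical parseRetryAfterMs helper from above and a plain fetch, and it is not the plugin's actual sync code:

```typescript
// Sketch only: bounded retry around a rate-limited request. A missing or
// malformed Retry-After header degrades to the parser's fallback delay,
// and the loop gives up after maxAttempts instead of retrying forever.
async function fetchWithRetry(url: string, maxAttempts = 5): Promise<unknown> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await fetch(url);
    if (response.status !== 429) {
      return response.json();
    }
    const delayMs = parseRetryAfterMs(response.headers.get('Retry-After'));
    console.warn(`Rate limited (attempt ${attempt}/${maxAttempts}); waiting ${delayMs} ms`);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Still rate limited after ${maxAttempts} attempts`);
}
```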