zmb3 / spotify

A Go wrapper for the Spotify Web API

[Bug/Suggestion] Better Rate Limit Handling #263

Open devonzara opened 4 months ago

devonzara commented 4 months ago

Situation

During the early stages of development, before I had gotten around to implementing caching and API throttling, I inadvertently triggered a re-render loop on the frontend. This caused an excessive number of requests to the backend, which in turn spammed the Spotify API.

Consequently, my app was suspended for approximately 24 hours and I began receiving the following responses:

GET /v1/artists/5zixe6AbgXPqt4c1uSl94L HTTP/1.1
Content-Type: application/json
Authorization: Bearer ████████████████████████████████████████████████████████████████████████████████████████████████████████████
User-Agent: PostmanRuntime/7.37.3
Accept: */*
Postman-Token: ███████████████████████████████████
Host: api.spotify.com
Accept-Encoding: gzip, deflate, br
Connection: keep-alive

HTTP/1.1 429 Too Many Requests
cache-control: private, max-age=0
retry-after: 74523
access-control-allow-origin: *
access-control-allow-headers: Accept, App-Platform, Authorization, Content-Type, Origin, Retry-After, Spotify-App-Version, X-Cloud-Trace-Context, client-token, content-access-token
access-control-allow-methods: GET, POST, OPTIONS, PUT, DELETE, PATCH
access-control-allow-credentials: true
access-control-max-age: 604800
content-encoding: gzip
strict-transport-security: max-age=31536000
x-content-type-options: nosniff
date: Sun, 30 Jun 2024 02:39:13 GMT
server: envoy
Via: HTTP/2 edgeproxy, 1.1 google
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Transfer-Encoding: chunked

Too many requests

Notice that the retry-after header is 74523 (roughly 20.7 hours) and the response body is plain text rather than the expected JSON. This causes zmb3/spotify to throw a "spotify: couldn't decode error: (17) [Too many requests]" error.

Suggestions

  1. Handle the plain text response body as an edge case.
  2. Implement a maxWait (also suggested in #241) or similar mechanism for retries.
  3. Provide access to the raw response or the retry-after header value so the application can decide how to handle the situation (see the sketch below).
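In the meantime, something along the following lines can serve as an application-side stopgap for 2 and 3: wrap the transport of the *http.Client that gets handed to the Spotify client and inspect the Retry-After header yourself. This is only a rough sketch; capRetryAfter, errRateLimited, and the 5-minute cap are placeholder names and values, not anything this package provides.

package main

import (
    "errors"
    "fmt"
    "net/http"
    "strconv"
    "time"
)

// errRateLimited is a sentinel error the application can test for with errors.Is.
var errRateLimited = errors.New("rate limited beyond acceptable wait")

// capRetryAfter wraps another RoundTripper and fails fast when a 429's
// Retry-After exceeds maxWait, instead of letting a retry loop sleep on it.
type capRetryAfter struct {
    base    http.RoundTripper
    maxWait time.Duration
}

func (t *capRetryAfter) RoundTrip(req *http.Request) (*http.Response, error) {
    resp, err := t.base.RoundTrip(req)
    if err != nil {
        return nil, err
    }
    if resp.StatusCode == http.StatusTooManyRequests {
        if secs, convErr := strconv.Atoi(resp.Header.Get("Retry-After")); convErr == nil {
            if wait := time.Duration(secs) * time.Second; wait > t.maxWait {
                resp.Body.Close()
                return nil, fmt.Errorf("%w: retry-after %s", errRateLimited, wait)
            }
        }
    }
    return resp, nil
}

func main() {
    // Wrap the transport of whatever *http.Client is handed to the Spotify
    // client constructor; a 429 asking for more than 5 minutes then surfaces
    // as errRateLimited immediately instead of blocking.
    httpClient := &http.Client{
        Transport: &capRetryAfter{base: http.DefaultTransport, maxWait: 5 * time.Minute},
    }
    _ = httpClient
}

Because the wrapper sits at the transport level, it sees the raw header before any JSON decoding is attempted, so the plain-text body from 1 never gets in the way.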
ATroschke commented 1 month ago

You can (and very likely should) use contexts to achieve the desired behaviour.

What I did was something like this:

// Bound this lookup to 30 seconds via the context deadline.
timeoutCtx, cancel := context.WithDeadline(parentCtx, time.Now().Add(30*time.Second))
artist, err := client.GetArtist(timeoutCtx, spotify.ID(event.SpotifyID))
cancel()

Then you can handle the error, which - when using withRetry - should be the latest error the client received for this call.
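For example, continuing the snippet above (just a sketch; it also needs the errors, context, and fmt imports, and whether errors.Is can see the deadline depends on how the failure gets wrapped on the way up):

if err != nil {
    if errors.Is(err, context.DeadlineExceeded) {
        // The 30-second budget ran out, most likely while waiting on a long
        // Retry-After; stop here instead of blocking any longer.
        return fmt.Errorf("spotify lookup timed out: %w", err)
    }
    return fmt.Errorf("spotify lookup failed: %w", err)
}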

If you want to get fancy, you can then block any further requests to the Spotify API made with that client id/secret combo. In my case I have a global service orchestrating things. I forked this package and played around with exposing the retry-after header, and I set a value in Redis that marks the Spotify API as "invalid" until a specific RFC 3339 timestamp.
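Roughly, that Redis part looks something like this (a sketch using go-redis v9; the key name and function names are only illustrative, and the retryAfter value has to come from the exposed header):

package spotifyguard // illustrative helper package, not part of zmb3/spotify

import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

// markSpotifyBlocked records that the Spotify API should not be called again
// until now + retryAfter (the Retry-After value from a 429 response). The key
// simply expires on its own once the block window has passed.
func markSpotifyBlocked(ctx context.Context, rdb *redis.Client, retryAfter time.Duration) error {
    until := time.Now().Add(retryAfter).Format(time.RFC3339)
    return rdb.Set(ctx, "spotify:blocked_until", until, retryAfter).Err()
}

// spotifyBlocked reports whether a previous 429 block window is still active.
// Any error (including a missing key) is treated as "not blocked" for brevity.
func spotifyBlocked(ctx context.Context, rdb *redis.Client) bool {
    _, err := rdb.Get(ctx, "spotify:blocked_until").Result()
    return err == nil
}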

There's sadly no great way around this if you want up-to-date data while still caching it.

In my case, I hit Spotify's API once every 24 hours for artists my users are interested in, to cache them and run some checks (does the artist have a new release, do I need to update their profile picture, etc.), and once a month for artists without any interest from my userbase.

I still run into this issue from time to time with just ~10 active test users, because of how heavily Spotify rate-limits in-development apps.

Pineapple217 commented 1 week ago

The context-based method is not ideal: the program blocks for the full 30 seconds when it could return an error right away. The Retry-After header already tells you how long the wait will be, so there is no need to wait blindly. It would not be that hard to implement a maximum retry time for the client. I will have a go at it if this is a feature the maintainers would accept.
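Roughly, the check itself could be as small as this (shouldWait and maxRetryWait are only illustrative names, not existing API; it needs net/http, strconv, and time):

// shouldWait parses Retry-After from a 429 response and reports whether an
// automatic retry is worth sleeping for. If Spotify asks for a longer wait
// than the configured maximum, it returns false so the caller can surface an
// error immediately instead of blocking.
func shouldWait(resp *http.Response, maxRetryWait time.Duration) (time.Duration, bool) {
    secs, err := strconv.Atoi(resp.Header.Get("Retry-After"))
    if err != nil {
        return 0, false
    }
    wait := time.Duration(secs) * time.Second
    if maxRetryWait > 0 && wait > maxRetryWait {
        return wait, false
    }
    return wait, true
}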