teslamotors / vehicle-command

Apache License 2.0
435 stars · 101 forks

API returns "Too many requests" (429) #255

Closed Urkman closed 4 months ago

Urkman commented 4 months ago

For about two hours now, the vehicle_data endpoint has been returning "Too many requests" (429) for my car :( The response says: "Retry in 37829 seconds"

What? The next try in over 10 hours?

Is this a bug? I currently fetch data every 60 seconds, and that was working two hours ago...

There are no announcements on the Tesla developer site: https://developer.tesla.com/docs/fleet-api#announcements-amp-api-changelog
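For anyone hitting this, a client can at least honor the server's advice instead of retrying blindly. A minimal sketch in Python; the endpoint URL is an assumption based on the public Fleet API docs (the host varies by region), and `retry_delay` just reads the standard `Retry-After` header:

```python
import json
import urllib.error
import urllib.request

# Assumed Fleet API base URL; the actual host depends on your account's region.
VEHICLE_DATA_URL = "https://fleet-api.prd.na.vn.cloud.tesla.com/api/1/vehicles/{vin}/vehicle_data"

def retry_delay(headers, default=3600):
    """Seconds to wait before retrying, taken from a 429 response's headers."""
    try:
        return max(0, int(headers.get("Retry-After", default)))
    except (TypeError, ValueError):
        return default

def fetch_vehicle_data(vin, token):
    """Fetch vehicle_data once; on 429, report the wait instead of hammering the API."""
    req = urllib.request.Request(
        VEHICLE_DATA_URL.format(vin=vin),
        headers={"Authorization": "Bearer " + token},
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return json.load(resp)["response"]
    except urllib.error.HTTPError as err:
        if err.code == 429:
            print("Rate limited; retry in %d seconds" % retry_delay(err.headers))
            return None
        raise
```

With the "Retry in 37829 seconds" response above, this would sleep out the ban instead of extending it with further 429s.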

Urkman commented 4 months ago

So, it looks like Tesla activated the rate limits without any notice :(

rileymd88 commented 4 months ago

The rate limits have been published for a while but they've recently just started enforcing them. According to some other GitHub comments new plans/tiers should hopefully be available in Q3.

On Fri, Jun 7, 2024 at 16:07 Stefan Sturm wrote:

> So, it looks like Tesla activated the rate limits without any notice :(

andig commented 4 months ago

The problem with the rate limits (if they have indeed been activated) is that there is no plan other than Discovery.

Urkman commented 4 months ago

And one big problem here is that the vehicle_data endpoint still does not return cached data :(

The docs say: "Makes a live call to the vehicle. This may return cached data if the vehicle is offline."

Urkman commented 4 months ago

So, that's the data we can work with...

[Screenshot from 2024-06-07, 20:07]
Urkman commented 4 months ago

And that's not even true... When the limit is fine and I get an answer, I don't get those headers... Is this a joke?

Urkman commented 4 months ago

@sethterashima Can you say something about this?

iainwhyte commented 4 months ago

> So, that's the data we can work with... [Screenshot from 2024-06-07, 20:07]

Is there a way to see the limits with the tesla-control reference app, or only in the headers of a direct call?

Urkman commented 4 months ago

If you got the data from every call, you could work with them. But since you only get them once it is already too late, they don't help you in any case :(

haroldboom commented 4 months ago

Why enforce an API rate limit when there is no option for people to pay for more calls???

I get that Tesla is trying to save money, but why not charge for more calls then? Don't enforce a rate limit on customers when there is no other option to get more calls.

Also, the 24-hour ban is way too harsh; it should be 300 seconds before another call can be made.

The most annoying thing for myself and others is: why do we need to do cloud-based API calls anyway? I have a Tesla wall charger 3; let me control the charging through that like every other manufacturer. I specifically purchased a Tesla after researching which brand is best for tinkering with open source, as I wanted to use HA, Teslamate, EVCC, etc., and it seems like every week we are chasing our tail trying to debug an API change (always with no notice).

Urkman commented 4 months ago

Does anybody know the exact time when the rate-limit count is reset? Then I could calculate the best time between requests, since the headers are still not sent :(
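Until the reset time is known, one fallback is to derive the spacing purely from the published daily budget. A sketch, assuming the 200-calls-per-day data budget mentioned later in this thread:

```python
def min_interval_seconds(daily_budget, safety_margin=0.9):
    """Smallest polling interval (seconds) that stays under a daily request budget.

    safety_margin < 1 leaves headroom for wakes, retries, and not knowing
    exactly when the counter resets.
    """
    if daily_budget <= 0:
        raise ValueError("budget must be positive")
    return 86_400 / (daily_budget * safety_margin)

# With an assumed 200-calls/day data budget, poll no faster than every 8 minutes:
interval = min_interval_seconds(200)  # → 480.0 seconds
```

So a 60-second poll (1440 calls/day) overshoots an assumed 200/day budget roughly sevenfold, which matches the long ban reported above.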

iainwhyte commented 4 months ago

> Why enforce an API rate limit when there is no option for people to pay for more calls??? [...]

While I don't condone the very ordinary way this was turned on (without warning, no documentation, no communication from Tesla), I do get why: across 2-3M vehicles, the number of API calls for commands will be enormous. They need it to be efficient, and one way to do that is to limit the API calls. 50 a day seems paltry, though; 5 per hour or so, enforced hourly, would seem more sane.

That said, I was not being very efficient with my calls. If I wanted to turn on charging to 80% at 32 amps, I would send a wake, then a start charging, then a set limit then a set amps - 4 calls, often 2 or 3 of them return with an 'already set' which I just didn't bother about. Now, I use a call to the current status (not counted in the command API limit), and only send what's needed. Don't wake if already awake, don't set limit if limit is already at 80, don't change amps etc unless a change is needed. The result is my overnight charging needs now fit in the 50 limit without too much hassle.
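That send-only-what-changed approach can be sketched as a diff between desired and reported state. The command names below are illustrative stand-ins, not the real Fleet API endpoints:

```python
def commands_needed(desired, current):
    """Return only the commands whose target differs from the reported state.

    Keys and command names are illustrative stand-ins for real Fleet API calls.
    """
    cmds = []
    if desired.get("awake") and not current.get("awake"):
        cmds.append("wake_up")
    if desired.get("charge_limit") != current.get("charge_limit"):
        cmds.append("set_charge_limit:%s" % desired["charge_limit"])
    if desired.get("charging_amps") != current.get("charging_amps"):
        cmds.append("set_charging_amps:%s" % desired["charging_amps"])
    if desired.get("charging") and not current.get("charging"):
        cmds.append("charge_start")
    return cmds

# Already awake, limit already 80: only amps and start are sent (2 calls, not 4).
print(commands_needed(
    {"awake": True, "charging": True, "charge_limit": 80, "charging_amps": 32},
    {"awake": True, "charging": False, "charge_limit": 80, "charging_amps": 16},
))  # → ['set_charging_amps:32', 'charge_start']
```

The one status read that feeds `current` costs a data call, but it can save several command calls against the much tighter command budget.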

That leaves solar charging. It's winter here, so not really in use, but pulling logs from a solar adjusted charge session from a few months back is hundreds of adjustments in ~4 hours.

I've gone to the trouble of delving into the BLE control. I'll adjust my tools to prefer that and use cloud as a backup. That way the limit is not a problem, with the added bonus of on-site commands that don't require internet at all.
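That BLE-first, cloud-fallback strategy amounts to trying transports in order. Both transports below are placeholders (the real BLE support lives in this repo's Go packages); the sketch only shows the fallback shape:

```python
def send_command(command, transports):
    """Try each transport in order (e.g. BLE first, cloud second)."""
    errors = []
    for name, send in transports:
        try:
            return name, send(command)
        except ConnectionError as err:
            errors.append((name, err))
    raise RuntimeError("all transports failed: %r" % errors)

def ble_send(cmd):
    # Placeholder: stands in for a local BLE session with the vehicle.
    raise ConnectionError("vehicle out of BLE range")

def cloud_send(cmd):
    # Placeholder: stands in for a Fleet API call.
    return "ok"

used, result = send_command("charge_start", [("ble", ble_send), ("cloud", cloud_send)])
# BLE fails here, so the command falls through to the cloud transport.
```

Only the fallback path consumes the cloud command budget, so routine local commands stay free of the rate limit.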

haroldboom commented 4 months ago

> While I don't condone the very ordinary way this was turned on without warning, and no documentation, no communication from Tesla, I do get why. [...]

Yeah, I agree there were people like myself running 30-second API calls to adjust charging. However, I am happy to pay to do this, and we weren't given the option, which is my major gripe. I had to spend money on a new Pi and a Bluetooth dongle, plus about 2 days trying to get them all to work. That money could have gone to an API balance similar to ChatGPT's, where you pay per call.

abreingan commented 4 months ago

> Now, I use a call to the current status (not counted in the command API limit) - iainwhyte

I have been using TeslaMate to retrieve data and the Fleet API (via the Home Assistant Tesla Custom Integration, alandtse/tesla) for commands to allow solar charging. This worked well, but TeslaMate seemed to be rate limited for a while recently, so that may stop working shortly. I was therefore interested to hear iainwhyte say that status calls don't count towards the 50. Does anyone know if that call is the same one used to poll in the Tesla Custom Integration for Home Assistant? There was some mention that setting the polling too often was tipping the command count.

iainwhyte commented 4 months ago

> I was therefore interested to hear iainwhyte say that status calls don't count towards the 50. [...]

[Image: documented rate limits]

The data calls are also rate limited, but with a higher limit (200 vs 50).
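Since the limit headers reportedly only appear once you are already throttled, a client-side tally against each documented budget (200 data calls and 50 commands per day, per the figures above) is one way to stay under them. The class below is a sketch, not anything the API itself provides:

```python
import time

class CallBudget:
    """Client-side tally of calls against a fixed budget over a rolling window."""

    def __init__(self, limit, window=86_400.0):
        self.limit = limit      # calls allowed per window
        self.window = window    # window length in seconds (assumed: one day)
        self.calls = []         # timestamps of calls still inside the window

    def allow(self, now=None):
        """Record a call and return True if the rolling window still has room."""
        now = time.time() if now is None else now
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) >= self.limit:
            return False
        self.calls.append(now)
        return True

# Assumed budgets from the documented figures: 200 data calls, 50 commands per day.
data_budget = CallBudget(limit=200)
command_budget = CallBudget(limit=50)
```

Gating every request through `allow()` lets a client refuse its own 201st data call instead of collecting a day-long ban from the server.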

https://developer.tesla.com/docs/fleet-api#api-best-practices

abreingan commented 4 months ago

Thanks, Iain.

sethterashima commented 4 months ago

The scope of this issue tracker is limited to the Golang code in this repository. Please reach out to the Fleet API support team: https://developer.tesla.com/docs/fleet-api#help-and-support.