I have a Tier 2 subscription with Anthropic, with these limits:

| Model | Requests per Minute | Tokens per Minute | Tokens per Day |
|---|---|---|---|
| Claude 3.5 Sonnet 2024-10-22 | 1,000 | 80,000 | 2,500,000 |
I was generating an app with the prompt: "an app for a car rental agency, called CarsYes; airbnb style experience".
I had changed the model from `claude-3-5-sonnet-20240620` to the latest one, `claude-3-5-sonnet-20241022`.
**I had spent about $9 when I started getting the error, and it then went up to $13; the code seems to keep driving the cost up even though I have already hit the limit.
I also ended up with multiple versions of the code for some files; the latest and some other version could produce the GUI.
I would suggest that the code detect this limit condition and save its state (if it is not already doing so), so that I can stop the app and resume the next day once the quota resets. Basically, there is no need to keep retrying against Anthropic after the daily limit is hit, wasting tokens and possibly generating many more versions of the output.
Also, as somebody suggested, maybe we could configure multiple Anthropic accounts with different API keys and switch keys when we reach this condition. Yes, we know we would be spending more money. A rough sketch of both ideas follows below.**
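To make that concrete, here is a minimal sketch of what the generation wrapper could do, assuming the `@anthropic-ai/sdk` client shown in the stack trace below. The `ANTHROPIC_API_KEYS` variable, the `resume-state.json` file, and the shape of the saved state are hypothetical names for illustration, not cofounder's actual implementation:

```js
// Rough sketch only, not cofounder's actual code.
// Assumes ANTHROPIC_API_KEYS holds one or more comma-separated keys (falling back to
// ANTHROPIC_API_KEY), and that `state` is whatever the build queue needs to resume.
import Anthropic from "@anthropic-ai/sdk";
import fs from "fs/promises";

const keys = (process.env.ANTHROPIC_API_KEYS || process.env.ANTHROPIC_API_KEY || "").split(",");
let keyIndex = 0;
let client = new Anthropic({ apiKey: keys[keyIndex] });

function hitDailyLimit(err) {
	// The 429 below reports 'anthropic-ratelimit-tokens-remaining': '0' with a next-day reset.
	return (
		err?.status === 429 &&
		Number(err?.headers?.["anthropic-ratelimit-tokens-remaining"]) === 0
	);
}

async function generate(params, state) {
	try {
		return await client.messages.create(params);
	} catch (err) {
		if (!hitDailyLimit(err)) throw err; // keep existing retry behavior for other errors
		if (keyIndex + 1 < keys.length) {
			// Optional: switch to the next account's key and try again.
			client = new Anthropic({ apiKey: keys[++keyIndex] });
			return await client.messages.create(params);
		}
		// No spare keys left: persist state and stop instead of retrying until the reset.
		await fs.writeFile("resume-state.json", JSON.stringify(state, null, 2));
		console.error(
			"Daily token limit reached; state saved. Resume after",
			err.headers["anthropic-ratelimit-tokens-reset"],
		);
		process.exit(0);
	}
}
```

Stopping with saved state would also avoid the extra file versions that pile up while the retries keep failing.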
Token usage:

| Input | Output |
|---|---|
| 848310 | 152981 |
These are the 2 errors I got: the daily rate limit (429), and something about the 5 MB image maximum.
{
asyncretry_error: {
id: 'op:LLM::GEN',
error: RateLimitError: 429 {"type":"error","error":{"type":"rate_limit_error","message":"Number of request tokens has
exceeded your daily rate limit (https://docs.anthropic.com/en/api/rate-limits); see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}}
at APIError.generate (file:///J:/cofounder/cofounder/api/node_modules/@anthropic-ai/sdk/error.mjs:55:20)
at Anthropic.makeStatusError (file:///J:/cofounder/cofounder/api/node_modules/@anthropic-ai/sdk/core.mjs:268:25)
at Anthropic.makeRequest (file:///J:/cofounder/cofounder/api/node_modules/@anthropic-ai/sdk/core.mjs:311:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async inference (file:///J:/cofounder/cofounder/api/utils/anthropic.js:75:20)
at async Object.opLlmGen [as op:LLM::GEN] (file:///J:/cofounder/cofounder/api/system/functions/op/llm.js:66:26)
at async file:///J:/cofounder/cofounder/api/build.js:87:30
at async file:///J:/cofounder/cofounder/api/build.js:84:25
at async file:///J:/cofounder/cofounder/api/node_modules/p-queue/dist/index.js:187:36 {
status: 429,
headers: {
'anthropic-ratelimit-requests-limit': '1000',
'anthropic-ratelimit-requests-remaining': '999',
'anthropic-ratelimit-requests-reset': '2024-11-03T07:50:35Z',
'anthropic-ratelimit-tokens-limit': '2500000',
'anthropic-ratelimit-tokens-remaining': '0',
'anthropic-ratelimit-tokens-reset': '2024-11-04T07:53:58Z',
'cf-cache-status': 'DYNAMIC',
'cf-ray': '8dcac5bdbfabab99-YYZ',
connection: 'keep-alive',
'content-length': '423',
'content-type': 'application/json',
date: 'Sun, 03 Nov 2024 07:50:16 GMT',
'request-id': 'resdfmay',
'retry-after': '884',
server: 'cloudflare',
via: '1.1 google',
'x-robots-tag': 'none',
'x-should-retry': 'true'
},
request_id: 'req_01VFsdfpAoLHmay',
error: {
type: 'error',
error: {
type: 'rate_limit_error',
message: 'Number of request tokens has exceeded your daily rate limit (https://docs.anthropic.com/en/api/rate-limits); see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales
to discuss your options for a rate limit increase.'
}
}
}
}
}
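For what it's worth, the response headers in the error above already carry enough information to tell a daily limit apart from a short per-minute one. A small sketch of how they could be interpreted (again just a sketch under that assumption, not existing cofounder code):

```js
// Sketch: interpret the 429 headers shown above (header names taken from the error output).
function classifyRateLimit(err) {
	if (err?.status !== 429) return { kind: "other" };
	const h = err.headers || {};
	const tokensRemaining = Number(h["anthropic-ratelimit-tokens-remaining"]); // '0' above
	const tokensReset = h["anthropic-ratelimit-tokens-reset"]; // '2024-11-04T07:53:58Z' above
	const retryAfterSec = Number(h["retry-after"]); // '884' above
	if (tokensRemaining === 0 && tokensReset) {
		// Daily token budget exhausted: retrying before the reset only burns requests.
		return { kind: "daily", resumeAt: new Date(tokensReset) };
	}
	// Otherwise treat it as a per-minute limit and wait the advertised time.
	return { kind: "minute", waitMs: (retryAfterSec || 60) * 1000 };
}
```

The second error, about the image size, was: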
{
SYSTEM_RUN_ERR: {
err: Error: Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"messages.0.content.1.image.source.base64: image exceeds 5 MB maximum: 6279864 bytes > 5242880 bytes"}}
at .../cofounder/api/build.js:107:17
at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
at async .../cofounder/api/build.js:84:25
at async .../cofounder/api/node_modules/p-queue/dist/index.js:187:36,
id: 'op:LLM::GEN'
}
}
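That second error is Anthropic's 5 MB per-image limit (5,242,880 bytes) being exceeded by a 6,279,864-byte image. A possible fix on the cofounder side would be to shrink the image before attaching it. Here is a sketch assuming the `sharp` library; the actual screenshot pipeline may differ, and re-encoding as JPEG would also mean sending `media_type: "image/jpeg"` in the content block:

```js
// Sketch only: downscale a base64 image until its decoded size fits Anthropic's 5 MB limit.
import sharp from "sharp";

const MAX_IMAGE_BYTES = 5 * 1024 * 1024; // 5242880 bytes, per the error above

async function fitUnderLimit(base64Image) {
	let buf = Buffer.from(base64Image, "base64");
	let width = (await sharp(buf).metadata()).width ?? 2000;
	// Re-encode at progressively smaller widths until the decoded size is under the limit.
	while (buf.length > MAX_IMAGE_BYTES && width > 200) {
		width = Math.floor(width * 0.8);
		buf = await sharp(buf).resize({ width }).jpeg({ quality: 80 }).toBuffer();
	}
	return buf.toString("base64");
}
```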