Closed paulcalcraft closed 3 days ago
Thanks for the report, we'll take a look!
Just a heads up, we have also been able to replicate this issue. This is running within a Lambda; the error occurs after a few hundred tokens. An example prompt which seems to replicate this for us is: "Please send the first 10 paragraphs of Alice's Adventures in Wonderland by Lewis Carroll (which is in the public domain)".
```
APIConnectionError: Connection error.
    at Function.generate (file:///var/task/node_modules/@anthropic-ai/sdk/error.mjs:32:20)
    at Stream.iterator (file:///var/task/node_modules/@anthropic-ai/sdk/streaming.mjs:52:40)
    ... 2 lines matching cause stack trace ...
    at async MessageStream._createMessage (file:///var/task/node_modules/@anthropic-ai/sdk/lib/MessageStream.mjs:113:26) {
  status: undefined,
  headers: undefined,
  error: undefined,
  cause: Error: [object Object]
      at castToError (file:///var/task/node_modules/@anthropic-ai/sdk/core.mjs:682:12)
      at Function.generate (file:///var/task/node_modules/@anthropic-ai/sdk/error.mjs:32:52)
      at Stream.iterator (file:///var/task/node_modules/@anthropic-ai/sdk/streaming.mjs:52:40)
      at runMicrotasks (<anonymous>)
      at processTicksAndRejections (node:internal/process/task_queues:96:5)
      at async MessageStream._createMessage (file:///var/task/node_modules/@anthropic-ai/sdk/lib/MessageStream.mjs:113:26)
}
```
Running into the exact same issue here when running it on Vercel using Vercel AI SDK.
FWIW, my guess is that this is due to Vercel timing out your handler, but I agree the error message being hard to read makes this worse. @RobertCraigie care to ticket?
Thanks! I'm on a Pro plan with Vercel, which allows 5-minute timeouts, so I don't think that's actually the case for me.
@rattrayalex FWIW, as I mentioned above, we have seen this error in plain old AWS Lambda, and have observed that it is not related to Lambda timeouts. (Just for my own edification, what's the relationship here with @stainless-api?)
Gotcha, that's helpful. We'll try to look into this, but a repro script would be very helpful. Can anyone share one?
what's the relationship here with https://github.com/stainless-api?
I work at Stainless, which Anthropic uses to build their SDKs.
I am seeing this too.
I'm running a NextJS app locally. Just randomly chatting with my app, it throws this error maybe every 5-10 requests. The app had been working fine with Together AI's API (via the OpenAI SDK) using Llama 3 and 3.1 for the last few months; since swapping over to Anthropic I'm now seeing this intermittent issue.
This is the output when the error is thrown:
```
APIConnectionError: Connection error.
    at APIError.generate (file:///Users/path/to/app/node_modules/@anthropic-ai/sdk/error.mjs:33:20)
    at Stream.iterator (file:///Users/path/to/app/node_modules/@anthropic-ai/sdk/streaming.mjs:52:40)
    ... 11 lines matching cause stack trace ...
    at async handleRequest (/Users/path/to/app/node_modules/next/dist/server/lib/router-server.js:353:24)
    at async requestHandlerImpl (/Users/path/to/app/node_modules/next/dist/server/lib/router-server.js:377:13) {
  status: undefined,
  headers: undefined,
  request_id: undefined,
  error: undefined,
  cause: Error: [object Object]
      at castToError (file:///Users/path/to/app/node_modules/@anthropic-ai/sdk/core.mjs:695:12)
      at APIError.generate (file:///Users/path/to/app/node_modules/@anthropic-ai/sdk/error.mjs:33:52)
      at Stream.iterator (file:///Users/path/to/app/node_modules/@anthropic-ai/sdk/streaming.mjs:52:40)
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      at async handler (webpack-internal:///(api)/./pages/api/chat.ts:269:30)
      at async K (/Users/path/to/app/node_modules/next/dist/compiled/next-server/pages-api.runtime.dev.js:21:2946)
      at async U.render (/Users/path/to/app/node_modules/next/dist/compiled/next-server/pages-api.runtime.dev.js:21:3827)
      at async DevServer.runApi (/Users/path/to/app/node_modules/next/dist/server/next-server.js:554:9)
      at async NextNodeServer.handleCatchallRenderRequest (/Users/path/to/app/node_modules/next/dist/server/next-server.js:266:37)
      at async DevServer.handleRequestImpl (/Users/path/to/app/node_modules/next/dist/server/base-server.js:791:17)
      at async /Users/path/to/app/node_modules/next/dist/server/dev/next-dev-server.js:331:20
      at async Span.traceAsyncFn (/Users/path/to/app/node_modules/next/dist/trace/trace.js:151:20)
      at async DevServer.handleRequest (/Users/path/to/app/node_modules/next/dist/server/dev/next-dev-server.js:328:24)
      at async invokeRender (/Users/path/to/app/node_modules/next/dist/server/lib/router-server.js:174:21)
      at async handleRequest (/Users/path/to/app/node_modules/next/dist/server/lib/router-server.js:353:24)
```
Could do something like this to serialize the object as JSON for use as the error message. Not an ideal fix but at least we'd be able to see what the error is.
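For example (an illustrative sketch only, not the SDK's actual code):

```typescript
// Illustrative sketch: fall back to JSON.stringify so the Error message
// carries the API's error detail instead of "[object Object]".
// The function name mirrors the SDK's castToError, but this is not its code.
function castToError(err: unknown): Error {
  if (err instanceof Error) return err;
  if (typeof err === "object" && err !== null) {
    try {
      return new Error(JSON.stringify(err));
    } catch {
      // Unserializable (e.g. circular) objects fall through below.
    }
  }
  return new Error(String(err));
}
```

With this, something like `castToError({ type: "error", error: { type: "overloaded_error" } })` would produce a message containing "overloaded_error" rather than "[object Object]".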
We're also seeing this issue (using `.messages.stream()`) – is this still on the roadmap to be fixed?
@greg84 @jbergs-dsit (or anyone else on this thread) could you please provide a codesandbox or similar which reproduces the error?
It should replicate for you using this repo (see comment above): https://github.com/beginner-corp/claude-begin-demo
Note: you don't need to deploy to Begin to replicate; just run the local sandbox with `npm start`.
Thank you @ryanblock, we'll take a look soon!
I have not been able to consistently reproduce this. It happens when the API returns an error in a streaming response. We have seen it during times of instability, when the API was returning 500 or overloaded errors.
Please read the original comment from paulcalcraft; it describes exactly what is happening: we just need to extract some useful detail from errJSON before the error is thrown.
EDIT: we're working on a fix for this internally.
@rattrayalex that appears to be a private repo?
~@RobertCraigie can you elaborate a bit on how the error has been fixed? Is e.g. the Connection Error not occurring anymore, or is the error it throws now processable by the `castToError` function?~
EDIT: answered by commit reference
Sorry, it looks like it was closed prematurely, before this commit was pushed.
This fix was released in v0.27.3. Really sorry for the delay here!
When hitting an error during the async iterator of an `anthropic.messages.create()` call, the exception raised and its associated error object don't have any detail; they just have `e.cause.message` set to `"[object Object]"`.

My example SSE that's occurring during streaming is:
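An error event of this general shape (the specific `overloaded_error` shown here is illustrative, not the exact payload from my logs):

```
event: error
data: {"type": "error", "error": {"type": "overloaded_error", "message": "Overloaded"}}
```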
The error SSE is then thrown using APIError.generate here: https://github.com/anthropics/anthropic-sdk-typescript/blob/ad92b0d536508954ee8b6c83e82bca30eefeb298/src/streaming.ts#L95
The errJSON is correctly being passed to generate, but because status isn't set (it's an SSE error event, not an HTTP error response), we use castToError() to raise the APIConnectionError with no other info: https://github.com/anthropics/anthropic-sdk-typescript/blob/ad92b0d536508954ee8b6c83e82bca30eefeb298/src/error.ts#L52
castToError just returns new Error(errJSON) https://github.com/anthropics/anthropic-sdk-typescript/blob/ad92b0d536508954ee8b6c83e82bca30eefeb298/src/core.ts#L977
But errJSON doesn't have a useful toString, so our cause Error object has the message "[object Object]" and no other properties. This means you can't handle or inspect the error cause correctly when catching the error during the async iterator.
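A quick demonstration of the failure mode (a sketch with a hypothetical payload, not the SDK's code):

```typescript
// Passing a plain object where Error expects a string message coerces it
// with String(), which yields "[object Object]" and discards all detail.
const errJSON = { type: "error", error: { type: "overloaded_error", message: "Overloaded" } };

const lossy = new Error(errJSON as unknown as string);
console.log(lossy.message); // "[object Object]"

// Serializing first keeps the error type and message inspectable.
const readable = new Error(JSON.stringify(errJSON));
console.log(readable.message); // includes "overloaded_error" and "Overloaded"
```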
An example error:
And where I'm catching it:
Would it be possible to correctly format the error so that it's possible to identify at least the error type by inspecting .cause on the APIConnectionError?
Thanks for any help. Also happy to submit a PR if there's agreement on the best way to surface the error detail in the APIConnectionError object.