baptisteArno opened 1 month ago
Are you open to contrib for this? Do you have an implementation in mind?
So after debugging it, I figured it sometimes failed because my prompt was not explicitly saying "Generate a JSON representation". I wonder if `ai` should add that bit to the provided prompt by default.
Depending on how it's configured, the JSON information should be injected automatically. You can use fetch overrides to get details for now: https://github.com/vercel/ai/blob/main/examples/ai-core/src/generate-text/openai-custom-fetch.ts
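A minimal sketch of such a fetch override (the wrapper name is mine, not from the linked example): it logs the raw request and response bodies before the SDK parses them.

```typescript
// Sketch: wrap the global fetch so every raw request/response body is
// logged. Pass it as the `fetch` option when creating a provider, as in
// the linked openai-custom-fetch example.
const loggingFetch: typeof fetch = async (input, init) => {
  if (init?.body) console.log('request body:', init.body);
  const res = await fetch(input, init);
  // Clone so the SDK can still consume the original body stream.
  console.log('response body:', await res.clone().text());
  return res;
};
```

You would then hand `loggingFetch` to the provider factory (e.g. `createOpenAI({ fetch: loggingFetch })`) to see what the model actually responded with, even while `rawResponse` only exposes headers.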
I second this issue. I find debugging generateObject quite opaque and the results unstable - a full rawResponse is needed.
There was a bug around temperature settings during object generation that has been fixed in 3.2.0: https://github.com/vercel/ai/pull/2012. The temperature for object generation with tool calls is now always 0 (it was undefined before). Results with e.g. OpenAI should be better now.
That's good to know. Can we still expect a full rawResponse, though?
Feature Description
I see that it also returns rawResponse, but that only contains a headers prop, which is not useful for debugging what the model responded with.

Use Case
No response
Additional context
No response