grahamseamans opened 1 week ago
I still don't have o1 and o1-mini access, so I can't verify. Can you check if you see something in the devtools console? Or can you possibly send me your API key in private?
Thanks for sending your key, @grahamseamans.
I see the o1 integration needs some work. For starters, streaming is not supported, so it has to be disabled (it will probably be supported on the GA release, though?). Secondly, the default token maximum (`max_completion_tokens`) of 512 is far too low: even a simple question uses many more tokens on o1 because of the reasoning.
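For illustration, a minimal sketch of how the request could be adjusted for the o1 family, assuming the OpenAI Node SDK; the `isO1Model` helper and the 8192 ceiling are placeholders, not the project's actual code:

```ts
import OpenAI from "openai";

const openai = new OpenAI();

// Hypothetical helper: the o1 family needs different request parameters.
const isO1Model = (model: string) => model.startsWith("o1");

async function askModel(model: string, prompt: string) {
  return openai.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
    // o1 / o1-mini currently reject streamed requests, so streaming stays off here.
    stream: false,
    // Reasoning tokens count against this limit, so a 512 default starves o1;
    // 8192 is an arbitrary, illustrative ceiling.
    max_completion_tokens: isO1Model(model) ? 8192 : 512,
  });
}
```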
Finally, we should show a warning message when `finish_reason` is anything other than `stop`. When the token limit is exceeded it is `length`, but we do not surface this anywhere and only show the empty reply, hence the confusion.
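Something along these lines could surface that warning; this is only a sketch, and `console.warn` stands in for whatever notification mechanism the UI actually uses:

```ts
// Minimal shape of a chat completion choice, to keep the sketch self-contained.
type Choice = { finish_reason: string | null };

function warnOnTruncatedResponse(choices: Choice[]): void {
  const finishReason = choices[0]?.finish_reason;
  if (finishReason && finishReason !== "stop") {
    const message =
      finishReason === "length"
        ? "Response was cut off: max_completion_tokens was reached before the model finished."
        : `Response ended unexpectedly (finish_reason: ${finishReason}).`;
    // console.warn stands in for the app's real toast / banner component.
    console.warn(message);
  }
}
```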
Great: disabling streaming and using a very high token limit makes o1 and o1-mini work for me.
I'll keep this open for now, since I want to increase the default tokens and add a warning for the `length` stop reason.
`Error: Failed to fetch` still happens when the response takes a long time (I am using o1-mini). Any clues why this happens?
All other models I've tried work fine.
The requests go out, and I know I have access, but maybe there's something about how the responses are formatted that's incompatible with the current renderer?
If someone has a fix already, or if it works for them, please let me know.