Open Penagwin opened 1 month ago
The idea is interesting, and the new branch (aka what will become Big AGI 2) has more support for it. The hard part is where to toggle this (on/off) in the UI: on a per-request, per-chat, or per-model basis? And should the JSON just be returned as a string to the UI, decoded somehow, or presented as a code block?
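One way the toggle question above could be layered is a per-model default that a per-chat setting overrides. This is only an illustrative sketch, not Big-AGI code; the type and function names are made up for the example.

```typescript
// Hypothetical settings shape: a per-model default for JSON mode that a
// per-chat setting can override (all names here are illustrative).
type JsonMode = 'on' | 'off';

interface ModelConfig { jsonMode?: JsonMode }  // per-model default
interface ChatConfig { jsonMode?: JsonMode }   // per-chat override

function resolveJsonMode(chat: ChatConfig, model: ModelConfig): JsonMode {
  // the chat-level setting wins; fall back to the model default, then 'off'
  return chat.jsonMode ?? model.jsonMode ?? 'off';
}

console.log(resolveJsonMode({}, { jsonMode: 'on' }));                   // 'on'
console.log(resolveJsonMode({ jsonMode: 'off' }, { jsonMode: 'on' }));  // 'off'
```

A per-request toggle could sit on top of this as one more override level.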
Why
I like to use big-agi as a sandbox and would like to use it to try different models in json_mode.
Description
A bunch of providers support OpenAI's
response_format: { type: "json_object" }
to use guidance/logit biasing so the output is (almost) always valid JSON. This helps a lot when trying out prompts/models. It would be best to know which models/providers support JSON mode; I'm not sure if the APIs expose that. I'd love to at least have an option, and have it error if it's not available.
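Since the APIs may not expose JSON-mode support, one approach is to fail fast against a known-good list before sending the request. A minimal sketch, assuming a hand-maintained support list (the model names and helper are illustrative, not actual Big-AGI code):

```typescript
// Illustrative: build an OpenAI-style request body, adding response_format
// only when JSON mode is requested, and error early when the chosen model
// is not on a known-good list (the list here is an assumption).
const JSON_MODE_MODELS = new Set(['gpt-4-1106-preview', 'gpt-3.5-turbo-1106']);

interface ChatMessage { role: string; content: string }

function buildRequestBody(model: string, messages: ChatMessage[], jsonMode: boolean) {
  if (jsonMode && !JSON_MODE_MODELS.has(model))
    throw new Error(`Model ${model} is not known to support response_format json_object`);
  const body: Record<string, unknown> = { model, messages };
  if (jsonMode)
    body.response_format = { type: 'json_object' };
  return body;
}
```

Erroring up front gives the "have it error if it's not available" behavior without waiting on a provider-side rejection.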
I know you're working on tool support. I'd love to be able to specify tools and just see the raw output of what tool it wants to select. This would be separate from actual tool support.
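Showing the raw tool selection without executing it could be as simple as pretty-printing the tool call from an OpenAI-style response. A sketch under that assumption (the `ToolCall` shape mirrors OpenAI's `tool_calls` entries; the helper name is made up):

```typescript
// Illustrative: render the model's raw tool selection as display text
// instead of executing it — separate from actual tool support.
interface ToolCall { function: { name: string; arguments: string } }

function rawToolCallText(toolCalls: ToolCall[]): string {
  return toolCalls
    .map(tc => {
      // arguments arrive as a JSON string; pretty-print them for readability
      const args = JSON.stringify(JSON.parse(tc.function.arguments), null, 2);
      return `${tc.function.name}(${args})`;
    })
    .join('\n');
}
```

The resulting string could then be shown in the UI as a code block.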
Requirements
If you can, please break down the changes: use cases, UX, technology, architecture, etc.