Open · hedihadi opened this issue 1 month ago
@hedihadi
For a tuned model, JSON mode is not supported, and you cannot pass a system instruction either.
Please refer to the link for the current limitations of tuned models.
This is a valid feature request.
@Gunand3043, that's a good resource. Thanks for the link.
this sometimes doesn't respect the response_schema structure
That shouldn't be possible; it uses constrained decoding. Can you explain more? Is it just that it leaves out fields sometimes?
Hey guys, is there any update on this feature? Do we know if it's being built or in the pipeline? Would be great to know. Thanks!
Description of the bug:
When I use gemini-1.5-flash or gemini-1.5-pro, I have no problem asking for a JSON response. It sometimes doesn't respect the response_schema structure, though, so I decided to fine-tune a model to get more predictable data. I did that and basically just swapped in the tuned model's name.
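Roughly, the setup looks like this (not my exact code; the Answer schema and the prompt are just placeholders):

```python
# Rough sketch of the base-model setup (placeholder schema and prompt).
import typing_extensions as typing
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # base models work with a plain API key


class Answer(typing.TypedDict):  # placeholder response_schema
    verse: str
    translation: str


model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",  # or "gemini-1.5-pro"
    system_instruction="Answer questions about the Quran and reply only with JSON.",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",
        response_schema=Answer,
    ),
)

response = model.generate_content("Give me one verse with its translation.")
print(response.text)  # JSON text, usually (but not always) matching the schema
```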
I'm also dealing with OAuth, following this documentation, since I'm now using a fine-tuned model: https://ai.google.dev/gemini-api/docs/oauth
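The auth side looks roughly like this (a sketch assuming application-default credentials were set up as described in that guide; the scope name is taken from there and may need adjusting):

```python
# Sketch: authenticate with OAuth application-default credentials instead of an API key
# (set up beforehand with `gcloud auth application-default login` per the OAuth guide).
import google.auth
import google.generativeai as genai

creds, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/generative-language.tuning"]
)
genai.configure(credentials=creds)

# Same setup as before, just pointing at the tuned model.
model = genai.GenerativeModel(model_name="tunedModels/quranchat-q68kkpnh5aad")
```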
But now, when I run the same code, I get this error:
400 Developer instruction is not enabled for tunedModels/quranchat-q68kkpnh5aad
After some horrendous hours trying to figure out what this means, I realized it's because I have a system_instruction argument. So I deleted that and ran the file again, and now I get this error:
400 Json mode is not enabled for tunedModels/quranchat-q68kkpnh5aad
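At this point the call looks roughly like this (again just a sketch, reusing the placeholder Answer schema from above), with JSON mode still requested:

```python
# Sketch of the call after removing system_instruction; JSON mode is still requested,
# which is what triggers this second error on the tuned model.
import google.generativeai as genai

model = genai.GenerativeModel(
    model_name="tunedModels/quranchat-q68kkpnh5aad",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",
        response_schema=Answer,  # the placeholder TypedDict from the first sketch
    ),
)
model.generate_content("Give me one verse with its translation.")
# fails with: 400 Json mode is not enabled for tunedModels/quranchat-q68kkpnh5aad
```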
This one is happening because I have 'response_mime_type': "application/json" in my generation_config variable.

Actual vs expected behavior:
So I basically realized I can't use a system instruction or ask for a JSON response when using a fine-tuned model. Is this a restriction of the library or of Gemini itself?
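As far as I can tell, the only variant that goes through for the tuned model right now is a plain call with neither of those, parsing the JSON myself (again just a sketch):

```python
# Sketch of the only configuration that currently works for the tuned model:
# no system_instruction and no response_mime_type/response_schema.
import json
import google.generativeai as genai

model = genai.GenerativeModel(model_name="tunedModels/quranchat-q68kkpnh5aad")
response = model.generate_content("Give me one verse with its translation, as JSON.")
try:
    data = json.loads(response.text)  # only works if the model happens to emit valid JSON
except json.JSONDecodeError:
    data = None  # no constrained decoding here, so this can fail
```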
Any other information you'd like to share?
Let me know if there's anything more I can share.