Isaadahmed2 opened 2 months ago
I've tried it many times; it seems to only work properly with GPT-4o for the moment. I assume fine-tuning the local LLMs is the way to go from here.
I'm trying as well, but my calls to the locally hosted model keep failing.
Try the Llama 3.1 API from Groq; it's free. Support for Groq has been added.
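
For reference, here is a minimal sketch of calling Llama 3.1 on Groq's free tier with the official `groq` Python client (`pip install groq`). The exact way this project wires in Groq may differ; this just shows the raw API call and an assumed model name.

```python
import os
from groq import Groq

# Free API key available from the Groq console; kept in an env var here.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # assumed Groq model name; swap for a larger variant if needed
    messages=[{"role": "user", "content": "Hello from the issue thread!"}],
)
print(response.choices[0].message.content)
```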
Is there any way we can use a Hugging Face model, an open-source LLM, or a quantized open-source LLM for this purpose, or can we use the Gemini free usage quota?