Open · agnoldo opened this issue 3 months ago
Hi @agnoldo
I completely agree with you. This is a very good idea! I wanted to add that you can actually achieve something similar using the PrivateGPT project. With PrivateGPT, you can run a model like Llama 3.1 8B locally and expose the same API interface that OpenAI provides. This means you can easily swap API calls to OpenAI for calls to your locally running model.
This approach could be a first workaround to get Promptimizer to work with local models.
Thanks for bringing this up!
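For anyone who wants to try that route before native support lands, here is a minimal sketch of what the swap looks like with the official openai Python package, pointed at a local OpenAI-compatible server (PrivateGPT or similar). The port, API key, and model name below are assumptions you'd adjust to whatever your local server actually exposes:

```python
from openai import OpenAI

# Point the client at a local OpenAI-compatible server instead of api.openai.com.
# The port and model name are assumptions -- adjust them to your local setup.
client = OpenAI(
    base_url="http://localhost:8001/v1",  # assumed local OpenAI-compatible endpoint
    api_key="not-needed-locally",         # most local servers ignore the key, but the SDK requires one
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # placeholder; use the model name your server reports
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```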
Hey @agnoldo! Sorry for responding late; I never saw your initial message!
It should absolutely be possible! I've never used Groq, but if we could get it to work, that would be game-changing. Ollama is unfortunately too slow, at least on my computer.
I don't have time to implement this, but I'm open to PRs! It should be relatively straightforward to add.
@agnoldo For local inference, Ollama should work - any other local inference engine that exposes a compatible API should work, right? LlamaCpp, etc.
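For reference, the default OpenAI-compatible base URLs for the engines mentioned above look roughly like this. Treat the ports as assumptions to verify against each project's docs, since they are configurable:

```python
from openai import OpenAI

# Default OpenAI-compatible base URLs for a few local inference engines.
# Ports may differ on your machine -- check each project's docs.
LOCAL_BASE_URLS = {
    "ollama": "http://localhost:11434/v1",    # Ollama's OpenAI-compatible API
    "llama.cpp": "http://localhost:8080/v1",  # llama-server's default /v1 endpoint
}

def local_client(engine: str) -> OpenAI:
    """Build an OpenAI-style client for a local inference engine."""
    return OpenAI(base_url=LOCAL_BASE_URLS[engine], api_key="local")  # key is ignored locally
```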
I haven't tried it myself yet in this project and don't have time to add a PR right now, but the breadcrumbs should all be here:
Pointing the chat completions endpoint at https://api.groq.com/openai/v1/chat/completions should help (see the docs), along with changing OPENAI_API_KEY=[...] to GROQ_API_KEY=[...] in both that file and your .env.
@austin-starks There's probably a much smarter way to do this, but this looks enough like a nail to my hammer-minded approach...
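Concretely, the swap described above would look something like the sketch below. This is untested against Promptimizer itself, and the model id is Groq's current name for Llama 3.1 8B, so check their model list before relying on it:

```python
import os
from openai import OpenAI

# Groq exposes an OpenAI-compatible API, so only the base URL and key change.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],  # set GROQ_API_KEY in your .env
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # Groq's Llama 3.1 8B id (verify against their model list)
    messages=[{"role": "user", "content": "Rewrite this prompt to be clearer: ..."}],
)
print(response.choices[0].message.content)
```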
Congratulations on your achievements, @austin-starks! I see huge potential for this project!
I was wondering if you could implement support for Groq and open source fast models such as Llama 3.1 8B. Imagine improving prompts for such a fast model, running at 1200 tokens/second! And cheaply. Or even locally, for those who need complete privacy...
Do you think this is feasible?
Thanks!