seyeong-han opened this issue 1 month ago
You could also check out Groq, which offers free Llama 3 models that can be useful for debugging.
I found that a local LLM would be good enough for Jockey developers to test this library, so I decided to integrate the llama3 model using Ollama for this Jockey project.
genai-stack is a good reference for connecting the Ollama and LangGraph Docker networks.
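For context, here is a minimal docker-compose sketch of that setup, loosely following the genai-stack pattern. The service names, network name, and environment variable are assumptions for illustration, not Jockey's actual configuration:

```yaml
# Hypothetical sketch: service/network names are assumptions, not Jockey's real config.
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"            # Ollama's default API port
    volumes:
      - ollama_models:/root/.ollama  # persist pulled models across restarts
    networks:
      - llm-net

  jockey:
    build: .
    environment:
      # Inside the compose network, reach Ollama by service name, not localhost.
      OLLAMA_BASE_URL: http://ollama:11434
    depends_on:
      - ollama
    networks:
      - llm-net

volumes:
  ollama_models:

networks:
  llm-net:
```

With both containers on the shared `llm-net` network, the app can point its Ollama client at `http://ollama:11434` instead of `localhost`, which is the usual pitfall when the LangGraph app runs in its own container.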
@seyeong-han Thank you! Do you want to start working on a PR for this issue?
Motivation
I wanted to participate more in solving the listed issues, but I already spent more than $30 on debugging with the ChatGPT API, lol.
Recently, Mistral announced that they have reduced their API prices by up to 80%. Curious which model is the most affordable, I created a table comparing the prices of different models to determine which API would be the cheapest to adopt.
To encourage participation without cost concerns, I want to implement support for Mistral LLM models, which are far cheaper than the current GPT-4o API.
I know that using a small model hinders the agents' performance, but it would still be useful for fast debugging and feature iteration.