Billy-Mitchell opened this issue 5 months ago
With the code base as it is now, it would not be as easy as changing a few lines of code, unless we wait for Ollama to implement function calling, which I do not believe they have yet. We could always try prompt engineering, but it would not be as robust.
Here is how the curl command you send would have to look:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "stream": false,
    "format": "json",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
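For the prompt-engineering route, here is a minimal sketch in Python (using the requests library against Ollama's native /api/chat endpoint) of how tool calls could be emulated: ask the model to answer in a fixed JSON shape and parse the reply yourself. The get_weather tool and its arguments schema are invented purely for illustration:

import json

import requests

# Hypothetical tool schema, made up for this sketch.
SYSTEM_PROMPT = (
    "You can call one tool. Reply ONLY with JSON of the form: "
    '{"tool": "get_weather", "arguments": {"city": "<city name>"}}'
)

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "stream": False,
        "format": "json",  # constrain the model's output to valid JSON
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "What is the weather in Oslo?"},
        ],
    },
)
response.raise_for_status()

# Ollama's reply has no tool_calls field, so parse the model's JSON content ourselves.
tool_call = json.loads(response.json()["message"]["content"])
print(tool_call["tool"], tool_call["arguments"])

As noted above, this is less robust than native function calling: the model can still return malformed or off-schema JSON, so real code would need validation and a retry path.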
Hi and thank you for your response.
I am quite new to the AI/LLM world and there is a lot to learn, so I didn't fully realize that function calling was something primarily available from OpenAI and other paid services.
I see that it is something planned in LangChain, but as you say, it would probably involve more than just changing a few lines of code.
Thanks again for your response :-)
I came across your video 'Forget CrewAI & AutoGen, Build CUSTOM AI Agents!' and thought you made many good points, so I wanted to try the code myself. I don't have an OpenAI account, so I wanted to adapt the code to use Ollama and llama3 instead. It all seemed quite manageable, but the part I'm stuck on is the 'tool_calls' object that OpenAI apparently includes in its response JSON, and how to work around or reimplement it.
I tried following the 'solution' here (https://github.com/ollama/ollama-python/issues/39#issuecomment-1973335681), but I can't get the response to include 'tool_calls'. Do you have any suggestions or insights on how best to work around this?
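For reference, the object I'm trying to reproduce looks roughly like this inside an OpenAI chat completion message (shown here as a Python literal; the id, function name, and arguments are placeholders):

message = {
    "role": "assistant",
    "content": None,  # content is null when the model makes tool calls
    "tool_calls": [
        {
            "id": "call_abc123",  # placeholder call id
            "type": "function",
            "function": {
                "name": "get_weather",  # placeholder function name
                "arguments": '{"city": "Oslo"}',  # arguments arrive as a JSON-encoded string
            },
        }
    ],
}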