Hi @chanwitkepha , thanks for checking this out!
If you are working from the "feature/local-llm" branch, please be aware that this is a work in progress.
That being said, you should be able to get started playing with the "autogen_test.py" example; however, you will also need to set up litellm.
For now, please reference this video for how to do this: https://www.youtube.com/watch?v=y7wMTwJN7rA&ab_channel=MatthewBerman
If you need more help I can provide further instructions.
Thank you for your help. I will try again by installing LiteLLM + Ollama and configuring autogen-agi to connect with LiteLLM.
Finally I was able to configure Autogen-AGI to work with LiteLLM + Ollama. Thank you.
Install and run LiteLLM with Ollama
pip install litellm litellm[proxy]
litellm --host 192.168.11.36 --port 28000 --model ollama/orca-mini &
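Once the proxy is running, a quick sanity check is to list the models it exposes. Below is a minimal sketch, assuming the proxy started with the command above is reachable and serves the OpenAI-compatible /v1/models endpoint:

```python
# Minimal sketch: list models from the LiteLLM proxy started above.
# Assumes the proxy is reachable at 192.168.11.36:28000 (adjust as needed).
import requests

resp = requests.get("http://192.168.11.36:28000/v1/models")
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])
```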
Test LiteLLM with curl
curl --location 'http://192.168.11.36:28000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
  "messages": [
    {
      "role": "user",
      "content": "what llm are you"
    }
  ]
}'
It shows this output:
INFO: 192.168.11.36:42440 - "POST /chat/completions HTTP/1.1" 200 OK
{"id":"chatcmpl-cd0611f3-ed23-4d78-929e-d0d2419b32bc","choices":[{"finish_reason":"stop","index":0,"message":{"content":" As an AI assistant, I am not capable of knowing my own identity as I do not have a physical form. However, I can provide you with information about various LLMs (such as OpenAI's GPT-3) if you are interested.","role":"assistant"}}],"created":1704625177,"model":"ollama/orca-mini","object":"chat.completion","system_fingerprint":null,"usage":{"prompt_tokens":45,"completion_tokens":52,"total_tokens":97}}
Now test with Autogen-AGI
In the file OAI_CONFIG_LIST.json:
[
  {
    "model": "orca-mini",
    "api_key": "NULL",
    "base_url": "http://192.168.11.36:28000"
  }
]
Then run python autogen_test.py
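For reference, here is a minimal sketch of what a test script along the lines of autogen_test.py might do with this config (the actual script in the repo may differ); it loads OAI_CONFIG_LIST.json with pyautogen's config_list_from_json and runs a short two-agent chat:

```python
# Minimal sketch, assuming the pyautogen package is installed; the repo's
# autogen_test.py may differ, but this shows how OAI_CONFIG_LIST.json is used.
import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST.json")

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
    code_execution_config=False,
)

user_proxy.initiate_chat(
    assistant,
    message="Say hello and name the model you are running on.",
)
```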
I have already installed Ollama (from https://github.com/jmorganca/ollama) on Ubuntu Server 18.04 LTS.
Testing it with curl works fine.
Then I installed autogen-agi in a Python 3.11 environment via Anaconda.
Test with Autogen-AGI
In the .env file:
In the file OAI_CONFIG_LIST.json:
Then I run:
python autogen_test.py
It fails with an error.
Please suggest how to fix this issue. Thank you.