SageMindAI / autogen-agi

AutoGen AGI: Advancing AI agents using AutoGen towards AGI capabilities. Explore cutting-edge enhancements in group chat dynamics, decision-making, and complex task proficiency. Join our journey in shaping AI's future!
https://www.linkedin.com/company/sagemind-ai
MIT License

Using autogen-agi with ollama (orca-mini) results in an error, please help. #6

Closed: chanwitkepha closed this issue 9 months ago

chanwitkepha commented 9 months ago

I already installed Ollama (from https://github.com/jmorganca/ollama) on Ubuntu Server 18.04 LTS.

ollama list

NAME                    ID              SIZE    MODIFIED
orca-mini:latest        2dbd9f439647    2.0 GB  2 hours ago

Testing with curl works fine:

curl http://127.0.0.1:11434/api/generate -d '{
>   "model": "orca-mini",
>   "prompt": "Why is the sky blue?"
> }'
{"model":"orca-mini","created_at":"2024-01-05T16:46:54.937363861Z","response":" The","done":false}
{"model":"orca-mini","created_at":"2024-01-05T16:46:54.963548218Z","response":" sky","done":false}
{"model":"orca-mini","created_at":"2024-01-05T16:46:54.98967223Z","response":" appears","done":false}
{"model":"orca-mini","created_at":"2024-01-05T16:46:55.015511833Z","response":" blue","done":false}
..........

{"model":"orca-mini","created_at":"2024-01-05T16:46:57.631165519Z","response":"","done":true,"context":[31822,13,8458,31922,3244,31871,13,3838,397,363,7421,8825,342,5243,10389,5164,828,31843,9530,362,988,362,365,473,31843,13,13,8458,31922,9779,31871,13,12056,322,266,7661,4842,31902,13,13,8458,31922,13166,31871,13,347,7661,4725,4842,1177,266,1124,906,287,260,1249,1676,6697,27554,27289,31843,1408,21062,16858,266,4556,31876,31829,7965,31844,357,19322,8634,12285,859,362,11944,291,22329,16450,31843,1872,16450,640,3304,266,1954,288,484,11468,31844,504,266,13830,4842,23893,31829,685,18752,541,4083,661,266,3002,2729,23893,31829,31843,672,1901,342,662,382,871,550,389,266,7661,31844,382,820,541,287,266,4842,23893,31829,661,266,2729,3688,31844,540,1988,266,7661,2024,4842,289,459,31843],"total_duration":4769137102,"load_duration":1973124460,"prompt_eval_count":46,"prompt_eval_duration":126935000,"eval_count":96,"eval_duration":2667461000}

Then I installed autogen-agi in a Python 3.11 environment via Anaconda:

conda create --name autogen-agi python=3.11
conda activate autogen-agi
python --version
Python 3.11.5

pip --version
pip 23.3.1 from /home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/pip (python 3.11)

Test with Autogen-AGI

In .env file

OPENAI_API_KEY=openai-api-key

GOOGLE_SEARCH_API_KEY=google-search-api-key
GOOGLE_CUSTOM_SEARCH_ENGINE_ID=google-custom-search-engine-id
GITHUB_PERSONAL_ACCESS_TOKEN=github-personal-access-token

SERP_API_KEY=serp-api-key

# Recommended engine: google or serpapi
SEARCH_ENGINE=ddg

# Uncomment below if you want to use ollama on a remote host like Google Colab
# See: https://www.youtube.com/watch?v=Qa1h7ygwQq8&t=329s&ab_channel=TechwithMarco
OLLAMA_HOST=http://127.0.0.1:11434
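
These values are presumably loaded at startup with python-dotenv; a minimal sketch of that pattern, assuming the python-dotenv package (the variable name matches the .env above):

# Load the .env file and read the Ollama host, falling back to the
# local default (assumes the python-dotenv package).
import os
from dotenv import load_dotenv

load_dotenv()
ollama_host = os.getenv("OLLAMA_HOST", "http://127.0.0.1:11434")
print(ollama_host)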

In file OAI_CONFIG_LIST.json

[
    {
        "model": "orca-mini",
        "base_url": "http://127.0.0.1:11434/api/generate"
    }
]

Then I ran python autogen_test.py.

It failed with this error:

llm_config_user_proxy: {'config_list': [{'model': 'orca-mini', 'base_url': 'http://127.0.0.1:11434/api/generate'}]}
llm_config_assistant: {'config_list': [{'model': 'orca-mini', 'base_url': 'http://127.0.0.1:11434/api/generate'}]}
user_proxy (to assistant):

Please execute a python script that prints 10 dad jokes.

--------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/ssd-disk3-data/devteam/autogen-agi/autogen_test.py", line 51, in <module>
    user_proxy.initiate_chat(
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 544, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 344, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 475, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 887, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 619, in generate_oai_reply
    response = client.create(
               ^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/oai/client.py", line 244, in create
    response = self._completions_create(client, params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/oai/client.py", line 314, in _completions_create
    response = completions.create(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/_utils/_utils.py", line 299, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 556, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/_base_client.py", line 1055, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/_base_client.py", line 834, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/_base_client.py", line 877, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: 404 page not found

Please suggest how to fix this issue. Thank you.
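
For context on the error: the traceback shows that autogen talks to the model through the OpenAI Python SDK, which always POSTs to {base_url}/chat/completions. With base_url pointed at Ollama's /api/generate, the request goes to a path Ollama does not serve, which is what produces the 404. A minimal sketch reproducing this, assuming the openai>=1.x SDK seen in the traceback:

# Reproduce the failing request path from the traceback above.
from openai import OpenAI

client = OpenAI(
    api_key="NULL",  # Ollama ignores the key, but the SDK requires one
    base_url="http://127.0.0.1:11434/api/generate",
)
# The SDK appends /chat/completions to base_url, so this POSTs to
# http://127.0.0.1:11434/api/generate/chat/completions, a route Ollama
# does not serve -> openai.NotFoundError: 404 page not found
client.chat.completions.create(
    model="orca-mini",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)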

JKHeadley commented 9 months ago

Hi @chanwitkepha , thanks for checking this out!

If you are working from the "feature/local-llm" branch, please be aware that this is a work in progress.

That being said, you should be able to get started playing with the "autogen_test.py" example; however, you will also need to set up litellm.

For now, please reference this video for how to do this: https://www.youtube.com/watch?v=y7wMTwJN7rA&ab_channel=MatthewBerman

If you need more help I can provide further instructions.

chanwitkepha commented 9 months ago

Thank you for your help. I will try again by installing litellm + ollama and configuring autogen-agi to connect through litellm.

chanwitkepha commented 9 months ago

Finally, I managed to configure Autogen-AGI to work with LiteLLM + Ollama. Thank you.

Install and run LiteLLM with Ollama

pip install litellm litellm[proxy]

litellm --host 192.168.11.36 --port 28000 --model ollama/orca-mini &

Test LiteLLM with curl

curl --location 'http://192.168.11.36:28000/chat/completions' \
    --header 'Content-Type: application/json' \
    --data ' {
    "messages": [
        {
        "role": "user",
        "content": "what llm are you"
        }
    ]
    }'

It returns this output:

INFO:     192.168.11.36:42440 - "POST /chat/completions HTTP/1.1" 200 OK
{"id":"chatcmpl-cd0611f3-ed23-4d78-929e-d0d2419b32bc","choices":[{"finish_reason":"stop","index":0,"message":{"content":" As an AI assistant, I am not capable of knowing my own identity as I do not have a physical form. However, I can provide you with information about various LLMs (such as OpenAI's GPT-3) if you are interested.","role":"assistant"}}],"created":1704625177,"model":"ollama/orca-mini","object":"chat.completion","system_fingerprint":null,"usage":{"prompt_tokens":45,"completion_tokens":52,"total_tokens":97}}

Now test with Autogen-AGI

In file OAI_CONFIG_LIST.json

[
    {
        "model": "orca-mini",
        "api_key": "NULL",
        "base_url": "http://192.168.11.36:28000"
    }
]

Then run python autogen_test.py
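
For reference, autogen_test.py presumably boils down to something like the following. A minimal sketch using pyautogen's standard agent API; the agent names and options here are assumptions, not the script's actual contents:

# Sketch of a minimal two-agent run against the OAI_CONFIG_LIST.json
# above (agent names and options are illustrative assumptions).
import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST.json")
llm_config = {"config_list": config_list}

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding"},
)
user_proxy.initiate_chat(
    assistant,
    message="Please execute a python script that prints 10 dad jokes.",
)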

(Screenshots of the successful autogen_test.py run were attached here.)