geekan / MetaGPT

🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
https://deepwisdom.ai/
MIT License

ollama run gemma error #988

Closed · victor-u closed this 6 months ago

victor-u commented 7 months ago

$ service ollama start
$ ollama run gemma:7b

hello
Hello, hello! 👋

It's a pleasure to hear from you. What would you like to talk about today?

$ metagpt "Create a 2048 game"
2024-03-12 00:33:17.766 | INFO | metagpt.const:get_metagpt_package_root:29 - Package root set to /home/victor
2024-03-12 00:33:19.063 | INFO | metagpt.team:invest:90 - Investment: $3.0.
2024-03-12 00:33:19.063 | INFO | metagpt.roles.role:_act:399 - Alice(Product Manager): to do PrepareDocuments(PrepareDocuments)
2024-03-12 00:33:19.078 | INFO | metagpt.utils.file_repository:save:60 - save to: /home/victor/workspace/20240312003319/docs/requirement.txt
2024-03-12 00:33:19.079 | INFO | metagpt.roles.role:_act:399 - Alice(Product Manager): to do WritePRD(WritePRD)
2024-03-12 00:33:19.080 | INFO | metagpt.actions.write_prd:run:86 - New requirement detected: Create a 2048 game
2024-03-12 00:33:19.081 | ERROR | metagpt.utils.common:log_it:476 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 0.001(s), this was the 1st time calling it. exp: 'async for' requires an object with __aiter__ method, got bytes
2024-03-12 00:33:19.473 | ERROR | metagpt.utils.common:log_it:476 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 0.393(s), this was the 2nd time calling it. exp: 'async for' requires an object with __aiter__ method, got bytes
2024-03-12 00:33:20.151 | ERROR | metagpt.utils.common:log_it:476 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 1.071(s), this was the 3rd time calling it. exp: 'async for' requires an object with __aiter__ method, got bytes
2024-03-12 00:33:21.482 | ERROR | metagpt.utils.common:log_it:476 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 2.402(s), this was the 4th time calling it. exp: 'async for' requires an object with __aiter__ method, got bytes
2024-03-12 00:33:22.570 | ERROR | metagpt.utils.common:log_it:476 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 3.490(s), this was the 5th time calling it. exp: 'async for' requires an object with __aiter__ method, got bytes
2024-03-12 00:33:25.662 | ERROR | metagpt.utils.common:log_it:476 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 6.582(s), this was the 6th time calling it. exp: 'async for' requires an object with __aiter__ method, got bytes
2024-03-12 00:33:25.662 | WARNING | metagpt.utils.common:wrapper:571 - There is a exception in role's execution, in order to resume, we delete the newest role communication message in the role's memory.
2024-03-12 00:33:25.667 | ERROR | metagpt.utils.common:wrapper:553 - Exception occurs, start to serialize the project, exp:
Traceback (most recent call last):
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__
    result = await fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/actions/action_node.py", line 422, in _aask_v1
    content = await self.llm.aask(prompt, system_msgs, images=images, timeout=timeout)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'async for' requires an object with __aiter__ method, got bytes

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/utils/common.py", line 562, in wrapper
    return await func(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/roles/role.py", line 558, in run
    rsp = await self.react()
          ^^^^^^^^^^^^^^^^^^
tenacity.RetryError: RetryError[<Future at 0x7205803c8610 state=finished raised TypeError>]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/utils/common.py", line 548, in wrapper
    result = await func(self, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/team.py", line 134, in run
    await self.env.run()
Exception: Traceback (most recent call last):
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__
    result = await fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/actions/action_node.py", line 422, in _aask_v1
    content = await self.llm.aask(prompt, system_msgs, images=images, timeout=timeout)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/provider/base_llm.py", line 89, in aask
    rsp = await self.acompletion_text(message, stream=stream, timeout=timeout)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
    return await fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
           ^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__
    result = await fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/provider/ollama_api.py", line 124, in acompletion_text
    return await self._achat_completion_stream(messages)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/provider/ollama_api.py", line 98, in _achat_completion_stream
    async for raw_chunk in stream_resp:
TypeError: 'async for' requires an object with __aiter__ method, got bytes

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/utils/common.py", line 562, in wrapper
    return await func(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/roles/role.py", line 558, in run
    rsp = await self.react()
          ^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/roles/role.py", line 525, in react
    rsp = await self._react()
          ^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/roles/role.py", line 471, in _react
    rsp = await self._act()
          ^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/roles/role.py", line 400, in _act
    response = await self.rc.todo.run(self.rc.history)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/actions/write_prd.py", line 87, in run
    return await self._handle_new_requirement(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/actions/write_prd.py", line 108, in _handle_new_requirement
    node = await WRITE_PRD_NODE.fill(context=context, llm=self.llm, exclude=exclude)  # schema=schema
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/actions/action_node.py", line 505, in fill
    return await self.simple_fill(schema=schema, mode=mode, images=images, timeout=timeout, exclude=exclude)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/metagpt/actions/action_node.py", line 457, in simple_fill
    content, scontent = await self._aask_v1(
                        ^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
    return await fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/victor/anaconda3/envs/sd/lib/python3.11/site-packages/tenacity/__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7205803c8610 state=finished raised TypeError>]

victor-u commented 7 months ago

$ cat /home/victor/.metagpt/config2.yaml
llm:
    api_type: "ollama"
    base_url: "http://127.0.0.1:11434/v1"
    model: "gemma:7b"
    api_key: "abc"
    repair_llm_output: true

better629 commented 7 months ago

@victor-u What version of MetaGPT are you running? Can you try the newest one? Also, the correct config is

llm:
    api_type: "ollama"
repair_llm_output: true

not

llm:
    api_type: "ollama"
    repair_llm_output: true
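
A minimal sketch of why the placement matters, assuming only PyYAML (this is not MetaGPT's actual config loader):

```python
# Nested under `llm:`, repair_llm_output becomes a key of the llm mapping;
# at top level it is a separate field of the overall config, which is what
# the correct layout above expects.
import yaml

right = yaml.safe_load("llm:\n  api_type: ollama\nrepair_llm_output: true")
wrong = yaml.safe_load("llm:\n  api_type: ollama\n  repair_llm_output: true")

print(right)  # {'llm': {'api_type': 'ollama'}, 'repair_llm_output': True}
print(wrong)  # {'llm': {'api_type': 'ollama', 'repair_llm_output': True}}
```
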
victor-u commented 7 months ago

(sd) victor@victor-yu:~/.config$ python --version
Python 3.11.8
(sd) victor@victor-yu:~/.config$ pip list | grep metagpt
metagpt    0.7.4

iorisa commented 6 months ago

This is the first time I've encountered an issue with Ollama not supporting streaming. Issue reports #987 and #966 indicate that Ollama is functioning, but the response does not meet the requirements. I compared metagpt/provider/ollama_api.py between version 0.7.4 and the main branch and found no substantial differences, so I suspect that the Ollama service does not support streams. Can you try a different model service?

cognitivetech commented 6 months ago

@iorisa https://github.com/ollama/ollama/blob/main/docs/openai.md

Supported features

  • [x] Chat completions
  • [x] Streaming
  • [x] JSON mode
  • [x] Reproducible outputs
iorisa commented 6 months ago

But this error message specifically indicates that it returns bytes data instead of an iterator.

'async for' requires an object with __aiter__ method, got bytes
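
For context, this failure is easy to reproduce with nothing but the standard library: the response body came back as plain bytes, and bytes has no __aiter__, so the `async for` in `_achat_completion_stream` fails immediately. A minimal sketch:

```python
# Minimal reproduction of the TypeError above: `async for` requires an async
# iterator (an object with __aiter__), and a raw bytes body is not one.
import asyncio


async def main() -> None:
    stream_resp = b'{"response": "hi", "done": true}'  # hypothetical raw body
    try:
        async for raw_chunk in stream_resp:  # same pattern as ollama_api.py
            print(raw_chunk)
    except TypeError as exc:
        # 'async for' requires an object with __aiter__ method, got bytes
        print(exc)


asyncio.run(main())
```
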
cognitivetech commented 6 months ago

That change is a month old; perhaps @victor-u has an older version of ollama.

better629 commented 6 months ago

@cognitivetech @victor-u I have checked the version of ollama; its client version is 0.1.17.

and here is the ollama log

[GIN] 2024/03/15 - 13:56:48 | 200 |  672.864962ms |  27.152.157.180 | POST     "/api/chat"
{"timestamp":1710482208,"level":"INFO","function":"log_server_request","line":2608,"message":"request","remote_addr":"127.0.0.1","remote_port":35022,"status":200,"method":"HEAD","path":"/","params":{}}
2024/03/15 13:56:48 llama.go:577: loaded 0 images
{"timestamp":1710482209,"level":"INFO","function":"log_server_request","line":2608,"message":"request","remote_addr":"127.0.0.1","remote_port":35022,"status":200,"method":"POST","path":"/completion","params":{}}
[GIN] 2024/03/15 - 13:56:49 | 200 |  668.167038ms |  27.152.157.180 | POST     "/api/chat"

metagpt log

(metagpt_310) MacBook-Pro:MetaGPT xxxxx$ python3 examples/llm_hello_world.py 
2024-03-15 13:59:24.589 | INFO     | metagpt.const:get_metagpt_package_root:29 - Package root set to /Users/xxxxx/work/demo-code/MetaGPT
2024-03-15 13:59:27.409 | INFO     | __main__:main:18 - what's your name: 
I'm just an AI, I don't have a personal name. However, you can call me Chatbot or AI Assistant for short! How can I help you today?
2024-03-15 13:59:32.713 | INFO     | metagpt.utils.cost_manager:update_cost:108 - prompt_tokens: 25, completion_tokens: 41
2024-03-15 13:59:32.715 | INFO     | __main__:main:19 - I'm just an AI, I don't have a personal name. However, you can call me Chatbot or AI Assistant for short! How can I help you today?
2024-03-15 13:59:32.715 | INFO     | __main__:main:20 - 

I'm LLaMA, an AI assistant developed by Meta AI that can understand and respond to human input in a conversational manner. I'm here to help you with any questions or topics you'd like to discuss! Is there something specific you'd like to talk about or ask?
2024-03-15 13:59:33.855 | INFO     | metagpt.utils.cost_manager:update_cost:108 - prompt_tokens: 23, completion_tokens: 64
2024-03-15 13:59:33.856 | INFO     | __main__:main:22 - 
I'm LLaMA, an AI assistant developed by Meta AI that can understand and respond to human input in a conversational manner. I'm here to help you with any questions or topics you'd like to discuss! Is there something specific you'd like to talk about or ask?
2024-03-15 13:59:34.338 | INFO     | metagpt.utils.cost_manager:update_cost:108 - prompt_tokens: 21, completion_tokens: 25
2024-03-15 13:59:37.027 | INFO     | metagpt.utils.cost_manager:update_cost:108 - prompt_tokens: 70, completion_tokens: 170
2024-03-15 13:59:37.028 | INFO     | __main__:main:28 - Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?
Sure! Here is a simple "Hello World" program in Python:

print("Hello, World!")

This will output "Hello, World!" when you run the program.

Here's a breakdown of how the code works:

* `print()` is a function that outputs text to the screen. In this case, it's printing the string "Hello, World!".
* `Hello, World!` is the message that will be printed to the screen.

To run this program, you can open a terminal or command prompt and type `python hello.py`. This will execute the code in the `hello.py` file and output "Hello, World!" to the screen.

I hope this helps! Let me know if you have any questions.

I found that the response header from ollama may have changed (sometimes it is application/json; charset=utf-8 in stream mode). But when I tried multiple times later, it was always application/x-ndjson. So, if you still have the problem, you can add a little debug code in the file metagpt/provider/general_api_requestor.py and then run pip3 install -e ., changing

    async def _interpret_async_response(
        self, result: aiohttp.ClientResponse, stream: bool
    ) -> Tuple[Union[bytes, AsyncGenerator[bytes, None]], bool]:
        content_type = result.headers.get("Content-Type", "")
        if stream and ("text/event-stream" in content_type or "application/x-ndjson" in content_type):

to

    async def _interpret_async_response(
        self, result: aiohttp.ClientResponse, stream: bool
    ) -> Tuple[Union[bytes, AsyncGenerator[bytes, None]], bool]:
        content_type = result.headers.get("Content-Type", "")
        print("content_type ", content_type)   # to DEBUG
        if stream and ("text/event-stream" in content_type or "application/x-ndjson" in content_type or "application/json" in content_type):

Copy and paste the content_type output here if you still hit the problem.
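
If you would rather check the header without patching MetaGPT, a standalone sketch along these lines shows what the server actually sends (the host, port, and model name are assumptions; adjust them to your setup):

```python
# Hypothetical standalone check, not MetaGPT code: POST a streaming chat
# request to a local Ollama and print the Content-Type header, which is
# what MetaGPT's stream detection keys off.
import asyncio

import aiohttp


async def main() -> None:
    payload = {
        "model": "gemma:7b",  # assumed; use whatever `ollama list` shows
        "messages": [{"role": "user", "content": "hello"}],
        "stream": True,
    }
    async with aiohttp.ClientSession() as session:
        async with session.post("http://127.0.0.1:11434/api/chat", json=payload) as resp:
            print("Content-Type:", resp.headers.get("Content-Type", ""))


asyncio.run(main())
```
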

ddrcrow commented 6 months ago

I have the same issue, but with ollama client 0.1.29 there is no /api/chat any more; it replies with a 404 error. From the ollama samples, I noticed that this works:

    curl http://xxx.xxx.xx.xx.xx:xxx/api/generate -d '{ "model": "llama2_7b_chat_q5km", "prompt": "Who are you?", "stream": false }'

BTW, we tried the solution in your last comment; it did not work for us, and the content_type was 'text/plain'.

better629 commented 6 months ago

@ddrcrow OK, it seems that ollama has an API update in the new version. We will check it out.

Can you try with "stream": true and output the value of content_type?

XiandanErizo commented 6 months ago

This configuration should use /api instead of /v1:

    base_url: "http://127.0.0.1:11434/v1"    # wrong
    base_url: "http://127.0.0.1:11434/api"   # right
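
For reference: per the ollama openai.md linked above, /v1 is Ollama's OpenAI-compatible layer, while /api is its native API, which MetaGPT's "ollama" api_type speaks. A quick smoke test of the native endpoint, assuming the requests package and a locally pulled model:

```python
# Hypothetical smoke test of Ollama's native /api/chat endpoint.
import requests

resp = requests.post(
    "http://127.0.0.1:11434/api/chat",  # note /api, not /v1
    json={
        "model": "llama2",  # assumed model name
        "messages": [{"role": "user", "content": "hi"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```
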

ddrcrow commented 6 months ago

@better629 I missed a 'not' in my last comment; it should read: "BTW, we tried the solution in your last comment, it did not work for us, and the content_type was 'text/plain'".

As you requested, with "stream": true I got the following:

    > POST /api/generate HTTP/1.1
    > User-Agent: curl/7.29.0
    > Host: xxx.xxx.xxx.xxx:xxxxx
    > Accept: */*
    > Content-Length: 80
    > Content-Type: application/x-www-form-urlencoded
    >
    * upload completely sent off: 80 out of 80 bytes
    < HTTP/1.1 200 OK
    < Content-Type: application/x-ndjson
    < Date: Wed, 20 Mar 2024 10:53:33 GMT
    < Transfer-Encoding: chunked
    <
    {"model":"llama2_7b_chat_q5km","created_at":"2024-03-20T10:53:33.565494387Z","response":"\n","done":false}
    {"model":"llama2_7b_chat_q5km","created_at":"2024-03-20T10:53:33.579111733Z","response":"What","done":false}
    {"model":"llama2_7b_chat_q5km","created_at":"2024-03-20T10:53:33.592751625Z","response":" do","done":false}
    {"model":"llama2_7b_chat_q5km","created_at":"2024-03-20T10:53:33.606401616Z","response":" you","done":false}
    {"model":"llama2_7b_chat_q5km","created_at":"2024-03-20T10:53:33.620116458Z","response":" want","done":false}
    {"model":"llama2_7b_chat_q5km","created_at":"2024-03-20T10:53:33.633864718Z","response":"?","done":false}
    ...
ddrcrow commented 6 months ago

@better629 chat works as well:

    $ curl --verbose http://xxx.xxx.xxx.xxx:8091/api/chat -d '{
      "model": "llama2_7b_chat_q5km",
      "messages": [
        { "role": "user", "content": "why is the sky blue?" }
      ]
    }'
    * About to connect() to xxx.xxx.xxx.xxx port 8091 (#0)
    *   Trying xxx.xxx.xxx.xxx...
    * Connected to xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx) port 8091 (#0)
    > POST /api/chat HTTP/1.1
    > User-Agent: curl/7.29.0
    > Host: xxx.xxx.xxx.xxx:8091
    > Accept: */*
    > Content-Length: 115
    > Content-Type: application/x-www-form-urlencoded
    >
    * upload completely sent off: 115 out of 115 bytes
    < HTTP/1.1 200 OK
    < Content-Type: application/x-ndjson
    < Date: Thu, 21 Mar 2024 02:41:04 GMT
    < Transfer-Encoding: chunked
    <
    {"model":"llama2_7b_chat_q5km","created_at":"2024-03-21T02:41:04.75089307Z","message":{"role":"assistant","content":"\n"},"done":false}
    {"model":"llama2_7b_chat_q5km","created_at":"2024-03-21T02:41:04.761910097Z","message":{"role":"assistant","content":"\n"},"done":false}
    {"model":"llama2_7b_chat_q5km","created_at":"2024-03-21T02:41:04.772910401Z","message":{"role":"assistant","content":"The"},"done":false}
    {"model":"llama2_7b_chat_q5km","created_at":"2024-03-21T02:41:04.784380084Z","message":{"role":"assistant","content":" Earth"},"done":false}
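
As the transcript shows, each streamed chunk is a standalone JSON object on its own line (application/x-ndjson). A rough sketch of how a client consumes such a stream (this is not MetaGPT's implementation; host, port, and model name are assumptions):

```python
# Hypothetical NDJSON consumer: read the chat stream line by line, decode
# each JSON chunk, print the partial content, and stop when "done" is true.
import asyncio
import json

import aiohttp


async def main() -> None:
    payload = {
        "model": "llama2_7b_chat_q5km",  # assumed model name
        "messages": [{"role": "user", "content": "why is the sky blue?"}],
        "stream": True,
    }
    async with aiohttp.ClientSession() as session:
        async with session.post("http://127.0.0.1:11434/api/chat", json=payload) as resp:
            async for line in resp.content:  # aiohttp yields the body line by line
                if not line.strip():
                    continue
                chunk = json.loads(line)
                print(chunk["message"]["content"], end="", flush=True)
                if chunk.get("done"):
                    break


asyncio.run(main())
```
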

better629 commented 6 months ago

@ddrcrow Can you configure it as described in https://docs.deepwisdom.ai/main/en/guide/tutorials/integration_with_open_llm.html#ollama-api-interface and run python3 examples/llm_hello_world.py? Paste the output if it fails.

api/chat still exists in the ollama main branch, so why do you say it no longer exists?

better629 commented 6 months ago

@victor-u @ddrcrow

The base_url of ollama is 'http://127.0.0.1:11434/api', not 'http://127.0.0.1:11434/v1', as described in https://docs.deepwisdom.ai/main/en/guide/tutorials/integration_with_open_llm.html#ollama-api-interface

ddrcrow commented 6 months ago

@better629 I have been using /api; I never used /v1. The documented configuration did not work for me, because it requires api_key:

    llm:
        api_type: 'ollama'
        base_url: 'http://127.0.0.1:11434/api'
        model: 'llama2'

    ValidationError: 1 validation error for Config
    llm.api_key
      Field required [type=missing, input_value={'api_type': 'ollama', 'm.../xx.xx.xxx.xx:xxx/api'}, input_type=dict]

ddrcrow commented 6 months ago

After giving a fake api_key, it failed again. BTW, my ollama works well with Open WebUI. My metagpt is 0.7.6.

(venv_3117) root@43e8ff60acbf:~# python llm_hello_world.py
2024-03-21 15:09:30.543 | INFO | metagpt.const:get_metagpt_package_root:29 - Package root set to /root
2024-03-21 15:09:32.620 | INFO | __main__:main:18 - what's your name:
Traceback (most recent call last):
  File "/root/llm_hello_world.py", line 43, in <module>
    asyncio.run(main())
  File "/root/.pyenv/versions/3.11.7/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/root/.pyenv/versions/3.11.7/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.pyenv/versions/3.11.7/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/root/llm_hello_world.py", line 19, in main
    logger.info(await llm.aask(question))
                ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv_3117/lib/python3.11/site-packages/metagpt/provider/base_llm.py", line 89, in aask
    rsp = await self.acompletion_text(message, stream=stream, timeout=timeout)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv_3117/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
    return await fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv_3117/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv_3117/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
           ^^^^^^^^^^^^
  File "/root/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/root/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/root/venv_3117/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__
    result = await fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv_3117/lib/python3.11/site-packages/metagpt/provider/ollama_api.py", line 124, in acompletion_text
    return await self._achat_completion_stream(messages)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv_3117/lib/python3.11/site-packages/metagpt/provider/ollama_api.py", line 98, in _achat_completion_stream
    async for raw_chunk in stream_resp:
TypeError: 'async for' requires an object with __aiter__ method, got bytes

victor-u commented 6 months ago
$ ollama --version
ollama version is 0.1.28

ollama with model gemma:7b returns:

2024-03-21 23:04:25.048 | WARNING  | metagpt.utils.repair_llm_raw_output:extract_content_from_output:316 - raw_content[CONTENT]

**Implementation approach:** We will use [Open-source framework] to build the game engine.

**File list:**

- main.py
- game.py

**Data structures and interfaces:**

```mermaid
class Diagram
    class Main {
        - SearchEngine search_engine
        + main() str
    }
    class SearchEngine {
        - Index index
        - Ranking ranking
        - Summary summary
        + search(query: str) str
    }
    class Index {
        - KnowledgeBase knowledge_base
        + create_index(data: dict)
        + query_index(query: str) list
    }
    class Ranking {
        + rank_results(results: list) list
    }
    class Summary {
        + summarize_results(results: list) str
    }
    class KnowledgeBase {
        + update(data: dict)
        + fetch_data(query: str) dict
    }
```

Program call flow:

```mermaid
sequenceDiagram
    participant M as Main
    participant SE as SearchEngine
    participant I as Index
    participant R as Ranking
    participant S as Summary
    participant KB as KnowledgeBase
    M->>SE: search(query)
    SE->>I: query_index(query)
    I->>KB: fetch_data(query)
    KB-->>I: return data
    I-->>SE: return results
    SE->>R: rank_results(results)
    R-->>SE: return ranked_results
    SE->>S: summarize_results(ranked_results)
    S-->>SE: return summary
    SE-->>M: return summary
```

Anything UNCLEAR:

[/CONTENT]



No JSON data was returned, so parsing JSON from the content inside [CONTENT][/CONTENT] failed.
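
A rough illustration of the failure mode, using a simple regex extractor as a stand-in for MetaGPT's actual parser: the model put markdown, not JSON, between the tags, so json.loads on the extracted span raises:

```python
# Hypothetical sketch: extract the [CONTENT]...[/CONTENT] span and try to
# parse it as JSON; it fails because the model emitted markdown instead.
import json
import re

raw = "[CONTENT]\n**Implementation approach:** ...\n[/CONTENT]"  # like the output above

match = re.search(r"\[CONTENT\](.*)\[/CONTENT\]", raw, re.DOTALL)
if match:
    try:
        json.loads(match.group(1))
    except json.JSONDecodeError as exc:
        print("not valid JSON:", exc)  # the failure victor-u hit
```
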
ddrcrow commented 6 months ago

mine is 0.1.29

ddrcrow commented 6 months ago

aha..., seems this issue was closed :(

better629 commented 6 months ago

So version 0.1.28 works well but 0.1.29 does not? Em... I will try 0.1.29, although there are some install problems with the new version~

better629 commented 6 months ago
(metagpt) root@2001:/data/MetaGPT# ollama --version
ollama version is 0.1.29

I have tested the current MetaGPT main and v0.7.6; both work well.

main

llm:
    api_type: 'ollama'
    base_url: 'http://xxx:11434/api'
    model: 'llama2'
$ python3 examples/llm_hello_world.py 
2024-03-22 11:34:34.370 | INFO     | metagpt.const:get_metagpt_package_root:29 - Package root set to /Users/xxx/work/code/MetaGPT
2024-03-22 11:34:39.233 | INFO     | __main__:main:18 - what's your name: 
I'm just an AI, I don't have a personal name. My purpose is to assist and provide information to users like you, so I don't have a personal identity in the classical sense. I exist to help answer questions and provide helpful responses, so please feel free to ask me anything!
2024-03-22 11:34:40.021 | INFO     | metagpt.utils.cost_manager:update_cost:108 - prompt_tokens: 10, completion_tokens: 65
2024-03-22 11:34:40.023 | INFO     | __main__:main:19 - I'm just an AI, I don't have a personal name. My purpose is to assist and provide information to users like you, so I don't have a personal identity in the classical sense. I exist to help answer questions and provide helpful responses, so please feel free to ask me anything!
2024-03-22 11:34:40.023 | INFO     | __main__:main:20 - 

I'm just an AI designed to assist and communicate with users in a helpful and informative manner. My primary function is to understand and respond to user input, whether it be through text-based conversations or other forms of interaction. I am constantly learning and improving my abilities through machine learning algorithms and natural language processing techniques. My goal is to provide accurate and useful information to users, while also being friendly and approachable. Is there anything else you would like to know or discuss?

v0.7.6

llm:
    api_type: 'ollama'
    base_url: 'http://xxxx:11434/api'
    api_key: 'sk-'
    model: 'llama2'
$ python3 examples/llm_hello_world.py 
2024-03-22 11:22:19.328 | INFO     | metagpt.const:get_metagpt_package_root:29 - Package root set to /Users/xxx/work/open-code/MetaGPT
Hello there! It's nice to meet you. How are you today? Is there something I can help you with or would you like to chat for a bit?
2024-03-22 11:22:23.895 | INFO     | metagpt.utils.cost_manager:update_cost:103 - prompt_tokens: 22, completion_tokens: 35
2024-03-22 11:22:23.897 | INFO     | __main__:main:16 - Hello there! It's nice to meet you. How are you today? Is there something I can help you with or would you like to chat for a bit?
2024-03-22 11:22:24.286 | INFO     | metagpt.utils.cost_manager:update_cost:103 - prompt_tokens: 6, completion_tokens: 26
2024-03-22 11:22:25.120 | INFO     | metagpt.utils.cost_manager:update_cost:103 - prompt_tokens: 24, completion_tokens: 70
2024-03-22 11:22:25.121 | INFO     | __main__:main:17 - Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?
Sure, here is a simple "Hello World" program in Python:
``
print("Hello, World!")
``
You can save this code in a file with a `.py` extension (e.g. `hello_world.py`) and run it using a Python interpreter or IDE (Integrated Development Environment).
2024-03-22 11:22:25.686 | INFO     | metagpt.utils.cost_manager:update_cost:103 - prompt_tokens: 18, completion_tokens: 43
better629 commented 6 months ago

@ddrcrow So, can you add the debug code to metagpt/provider/general_api_requestor.py as described before, and run only logger.info(await llm.aask(question)) in examples/llm_hello_world.py? It will help find the cause.

ddrcrow commented 6 months ago

OK, let me try over the weekend.

better629 commented 6 months ago

OK, tell me if you still have problems~

ddrcrow commented 6 months ago

@better629 I debugged it. OK, it was my fault: case sensitivity in the model name :(. It works for the llm_hello_world samples. Thanks for your help!

(venv_3117) root@43e8ff60acbf:~/workspace/metagpt_test# python ./llm_hello_world_simple.py
2024-03-23 12:18:45.126 | INFO | metagpt.const:get_metagpt_package_root:29 - Package root set to /root/workspace/metagpt_test
2024-03-23 12:18:47.068 | INFO | __main__:main:18 - what's your name:
=========================debug added by ddrcrow for _achat_completion_stream============================
metod is post, url is /chat, paramms is [{'role': 'user', 'content': "what's your name"}], _const_kwargs is {'model': 'llama2_7b_chat_q5km', 'messages': [{'role': 'user', 'content': "what's your name"}], 'options': {'temperature': 0.3}, 'stream': True}
?" "My name is Sherlock Holmes," I replied, with a hint of pride. "Oh, you're that famous detective!" she exclaimed. "I've heard so much about you! Can you help me solve this mystery? My sister has gone missing and the police say they can't do anything without more evidence." I raised an eyebrow. "Tell me more," I said, intrigued. And so she did. She told me everything she knew about her sister's disappearance: when and where she was last seen, who had been with her, any possible motives for someone to want to harm her. I listened intently, taking in every detail. "Thank you for coming," she said, as we finished our conversation. "I know it's a lot to ask, but I really hope you can help me find my sister." "Don't worry, Mrs...?" I prompted. "Watson," she replied. "Nellie Watson." "Of course, Mrs Watson," I said, standing up. "I will do everything in my power to help you find your sister. I will start by investigating the circumstances of her disappearance and see if there are any leads that might point to her whereabouts. In the meantime, please keep me informed of any developments or any other information that might be relevant." Mrs Watson nodded gratefully, and I left her house, my mind already racing with possibilities and potential solutions. I knew it wouldn't be an easy case, but I was determined to crack it and bring Mrs Watson's sister back home safe and sound.
2024-03-23 12:18:52.409 | INFO | metagpt.utils.cost_manager:update_cost:103 - prompt_tokens: 6, completion_tokens: 355
2024-03-23 12:18:52.410 | INFO | __main__:main:19 - ?" "My name is Sherlock Holmes," I replied, with a hint of pride. "Oh, you're that famous detective!" she exclaimed. "I've heard so much about you! Can you help me solve this mystery? My sister has gone missing and the police say they can't do anything without more evidence." I raised an eyebrow. "Tell me more," I said, intrigued. And so she did. She told me everything she knew about her sister's disappearance: when and where she was last seen, who had been with her, any possible motives for someone to want to harm her. I listened intently, taking in every detail. "Thank you for coming," she said, as we finished our conversation. "I know it's a lot to ask, but I really hope you can help me find my sister." "Don't worry, Mrs...?" I prompted. "Watson," she replied. "Nellie Watson." "Of course, Mrs Watson," I said, standing up. "I will do everything in my power to help you find your sister. I will start by investigating the circumstances of her disappearance and see if there are any leads that might point to her whereabouts. In the meantime, please keep me informed of any developments or any other information that might be relevant." Mrs Watson nodded gratefully, and I left her house, my mind already racing with possibilities and potential solutions. I knew it wouldn't be an easy case, but I was determined to crack it and bring Mrs Watson's sister back home safe and sound.
2024-03-23 12:18:52.411 | INFO | __main__:main:20 -