Open jtara1 opened 3 months ago
Ollama is not as effective as OpenAI. Try setting `repair_llm_output: true`
in `config2.yaml` to get better output.
Still getting the same error with that config option. My `~/.metagpt/config2.yaml`:

```yaml
llm:
  api_type: 'ollama'
  base_url: 'http://127.0.0.1:11434/api'
  model: 'llama3'
  repair_llm_output: true
```
I'm getting a similar error with Ollama: `json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)` when running `metagpt` from the command line. I filed an issue: https://github.com/geekan/MetaGPT/issues/1383. I added `repair_llm_output: true` to `~/.metagpt/config2.yaml`, but I still get the same error.
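For reference, this exact exception is what `json.loads` raises on any input whose first character cannot start a JSON value, so it indicates the model's raw output (or the transport framing around it) is not JSON at all. A minimal repro, independent of MetaGPT:

```python
import json

# json.loads raises "Expecting value: line 1 column 1 (char 0)" whenever
# the input does not begin with a JSON value: an empty string, prose from
# the model, a log line, an SSE "data: " prefix, etc.
for bad in ["", "data: {}", "INFO | some log line"]:
    try:
        json.loads(bad)
    except json.JSONDecodeError as e:
        print(repr(bad), "->", e)
```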
```
$ metagpt "Create a 2048 game"
2024-07-07 14:37:19.995 | INFO | metagpt.const:get_metagpt_package_root:29 - Package root set to /Users/ken/devel/ai/agents/metagpt_novel
2024-07-07 14:37:27.413 | INFO | metagpt.team:invest:90 - Investment: $3.0.
2024-07-07 14:37:27.414 | INFO | metagpt.roles.role:_act:391 - Alice(Product Manager): to do PrepareDocuments(PrepareDocuments)
2024-07-07 14:37:27.535 | INFO | metagpt.utils.file_repository:save:57 - save to: /Users/ken/devel/ai/agents/metagpt_novel/workspace/20240707143727/docs/requirement.txt
2024-07-07 14:37:27.538 | INFO | metagpt.roles.role:_act:391 - Alice(Product Manager): to do WritePRD(WritePRD)
2024-07-07 14:37:27.540 | INFO | metagpt.actions.write_prd:run:86 - New requirement detected: Create a 2048 game
```
Sure! Here's an example of how you could fill in the nodes for a 2048 game project:
### Language
Python
### Original Requirements
Create a 2048 game where players can move tiles around to combine them and reach the goal of having a 2048 tile.
### Project Name
Game_2048
### Product Goals
1. Create an engaging user experience
2. Improve accessibility, be responsive
3. More beautiful UI
### User Stories
1. As a player, I want to be able to choose difficulty levels so that the game is challenging but not too hard or too easy.
2. As a player, I want to see my score after each game so that I can track my progress and compete with others.
3. As a player, I want to get a restart button when I lose so that I can try again without feeling frustrated.
4. As a player, I want to see beautiful UI that makes me feel good and motivates me to play more.
5. As a player, I want to play the game via mobile phone so that I can play it anywhere, anytime.
### Competitive Analysis
1. 2048 Game A: Simple interface, lacks responsive features
2. Play2048.co: Beautiful and responsive UI with my best score shown
3. 2048game.com: Responsive UI with my best score shown, but many ads
### Competitive Quadrant Chart
Title: Reach and engagement of campaigns
X-axis: Low Reach --> High Reach
Y-axis: Low Engagement --> High Engagement
Quadrant 1: We should expand
Quadrant 2: Need to promote
Quadrant 3: Re-evaluate
Quadrant 4: May be improved
Campaign A: [0.3, 0.6]
Campaign B: [0.45, 0.23]
Campaign C: [0.57, 0.69]
Campaign D: [0.78, 0.34]
Campaign E: [0.40, 0.34]
Campaign F: [0.35, 0.78]
Our Target Product: [0.5, 0.6]
### Requirement Analysis
Not applicable for this project.
### Requirement Pool
1. P0: The main code of the game should be easy to understand and modify.
2. P0: The game algorithm should be efficient and fast.
3. P1: The UI should be visually appealing and easy to use.
4. P2: The game should be responsive and work well on different devices.
### UI Design draft
Basic function description with a simple style and layout.
### Anything UNCLEAR
Nothing is unclear for this project.
```
2024-07-07 14:40:39.306 | WARNING | metagpt.utils.cost_manager:update_cost:49 - Model llama2:latest not found in TOKEN_COSTS.
2024-07-07 14:40:39.317 | WARNING | metagpt.utils.repair_llm_raw_output:extract_content_from_output:320 - extract_content try another pattern: \[CONTENT\]([\s\S]*)\[/CONTENT\]
2024-07-07 14:40:39.318 | WARNING | metagpt.utils.repair_llm_raw_output:run_and_passon:268 - parse json from content inside [CONTENT][/CONTENT] failed at retry 1, exp: Expecting value: line 1 column 1 (char 0)
2024-07-07 14:40:39.318 | INFO | metagpt.utils.repair_llm_raw_output:repair_invalid_json:237 - repair_invalid_json, raw error: Expecting value: line 1 column 1 (char 0)
2024-07-07 14:40:39.319 | ERROR | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 191.776(s), this was the 1st time calling it. exp: RetryError[<Future at 0x130ca46d0 state=finished raised JSONDecodeError>]
```
Ollama is definitely being used. The CPU usage is at 700% and my laptop is humming.
The problem is in `_decode_and_load`:

```python
def _decode_and_load(self, chunk: bytes, encoding: str = "utf-8") -> dict:
    chunk = chunk.decode(encoding)
    return json.loads(chunk)
```
The error info is:

```
>>> chunk
'data: {"id":"chatcmpl-471","object":"chat.completion.chunk","created":1720838591,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"I"},"finish_reason":null}]}\n'
>>> json.loads(chunk)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/opt/homebrew/Cellar/python@3.9/3.9.19/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/opt/homebrew/Cellar/python@3.9/3.9.19/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/opt/homebrew/Cellar/python@3.9/3.9.19/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
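For what it's worth, that exact chunk from the traceback parses cleanly once the Server-Sent-Events `data: ` prefix is stripped, which confirms the prefix is the culprit (quick check):

```python
import json

# The chunk from the traceback above, reproduced verbatim.
raw = (
    'data: {"id":"chatcmpl-471","object":"chat.completion.chunk",'
    '"created":1720838591,"model":"llama3","system_fingerprint":"fp_ollama",'
    '"choices":[{"index":0,"delta":{"role":"assistant","content":"I"},'
    '"finish_reason":null}]}\n'
)

# With the SSE "data: " prefix removed, the remainder is valid JSON.
payload = json.loads(raw.removeprefix("data: "))
print(payload["model"])                            # llama3
print(payload["choices"][0]["delta"]["content"])   # I
```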
Maybe you can modify it as follows:

```python
def _decode_and_load(self, chunk: bytes, encoding: str = "utf-8") -> dict:
    chunk = chunk.decode(encoding)
    json_data = chunk.removeprefix('data: ').strip()
    # print("json_data: ", json_data, "\n")
    if len(json_data) == 0:
        return {}
    elif json_data.lower().find("done") != -1:
        return {"done": True}
    else:
        ret = json.loads(json_data)
        delta = ret.get('choices', [{}])[0].get('delta', {})
        ret["message"] = delta
        return ret
```

and at the call site:

```python
async for raw_chunk in stream_resp:
    chunk = self._decode_and_load(raw_chunk)
    if chunk == {}:
        continue
    if not chunk.get("done", False):
        ...
```
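A standalone sketch of that patched helper (a free function here for illustration; in MetaGPT it is a method on the provider class), exercised on chunks shaped like the ones in the log above:

```python
import json

def decode_and_load(chunk: bytes, encoding: str = "utf-8") -> dict:
    """Sketch of the proposed patch as a module-level function."""
    text = chunk.decode(encoding)
    json_data = text.removeprefix("data: ").strip()  # removeprefix: Python 3.9+
    if not json_data:
        return {}  # blank keep-alive line between SSE events
    if "done" in json_data.lower():
        return {"done": True}  # terminal "data: [DONE]" sentinel
    ret = json.loads(json_data)
    # Surface the OpenAI-style delta under "message", mirroring the shape
    # the Ollama code path expects downstream.
    ret["message"] = ret.get("choices", [{}])[0].get("delta", {})
    return ret

print(decode_and_load(b"\n"))              # {}
print(decode_and_load(b"data: [DONE]\n"))  # {'done': True}
chunk = b'data: {"choices":[{"index":0,"delta":{"content":"Hi"}}]}\n'
print(decode_and_load(chunk)["message"])   # {'content': 'Hi'}
```

One caveat with this approach: the substring check fires on *any* payload whose JSON happens to contain the text `done`, not just the `[DONE]` sentinel, so a stricter comparison (e.g. `json_data == "[DONE]"`) would be safer.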
Bug description
The Project Manager transforms the Architect's WriteDesign output; that output is not JSON, but it is then parsed as JSON.
run command
Environment information
probably the `main` branch, likely commit 38cea1daf2b87ebc31a56c995d7857d07289fa70
pip 22.3.1
pip install --upgrade git+https://github.com/geekan/MetaGPT.git
Screenshots or logs
WriteDesign from Architect
WriteTasks from Project Manager
actual error from stderr