Trawczynski opened 3 months ago
Try adding `max_token: 2048` to your `config2.yaml` file as follows. Note: 2048 is an int, not a string.

```yaml
llm:
  api_type: xxx
  model: xxx
  ...
  max_token: 2048
```
Useful answer!!
Bug description
Hi, I have been struggling to run RAG with GPT-4o in MetaGPT v0.8.1. When I run the first code example, the following error occurs:
This is my configuration file:
Bug solved method
I checked the code and found that this happens because the context size of the `gpt-4o` model is not defined in `metagpt/utils/token_counter.py` (the same is true of `gpt-4-turbo`, which is less recent). The default context size (3900) is therefore used, resulting in this error. The exception is thrown by LlamaIndex and is not informative enough to understand what is going on; this problem should be handled internally by MetaGPT. Adding a `context_size` field to the configuration file could also be useful, as it would let users run models that are not yet supported, as well as limit the length of requests sent to the LLM provider (if there were a reason to do so).