It seems that no matter what I specify as a "model" in task.json, the majority of costs are still for gpt-4o.
If I specify the model as "gpt-3.5-turbo", the script will use this model, but it will also use gpt-4o at the same time.
In my last run, where I specified "gpt-3.5-turbo" as the model, the code made 2 requests to gpt-3.5-turbo and 30 requests to gpt-4o.
@JustUser1410 You are correct: the multi-agent mode currently only supports OpenAI models, for optimized multi-agent performance. If you're interested in using any other LLM, you can try the root directory functionality.
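For reference, the kind of task.json override discussed above might look like the minimal sketch below. Only the "model" field is taken from the reports in this thread; the exact schema and any other fields are assumptions and may differ in practice:

```json
{
  "model": "gpt-3.5-turbo"
}
```

As noted in the reply, this setting appears to apply only to some requests, while the multi-agent components continue to call gpt-4o regardless of the value here.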