All-in-one AI CLI tool featuring Chat-REPL, Shell Assistant, RAG, AI tools & agents, with access to OpenAI, Claude, Gemini, Ollama, Groq, and more.
max_output_tokens option from model in config.yaml is ignored when model is selected #641
Closed
henryprecheur closed 3 months ago
Describe the bug
I added Claude 3.5 Sonnet to my aichat configuration and I get the following error:
To Reproduce
config.yaml
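The original config contents were not captured here; a minimal sketch of the kind of entry involved, assuming aichat's `clients`/`models` configuration layout and an illustrative model name and key, might look like:

```yaml
# Hypothetical config.yaml fragment (model name, token limit, and key are placeholders)
model: claude:claude-3-5-sonnet
clients:
  - type: claude
    api_key: sk-ant-xxxx
    models:
      - name: claude-3-5-sonnet
        max_output_tokens: 4096   # the option reported as being ignored
```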
Start
aichat
and try to use the model.

Expected behavior
The configured max_output_tokens option is used by aichat when submitting the request.
Environment (please complete the following information):