Your AI second brain. Get answers to your questions, whether from the internet or your own notes. Use online AI models (e.g. GPT-4) or private, local LLMs (e.g. Llama 3). Self-host locally or use our cloud instance. Access from Obsidian, Emacs, the Desktop app, the Web or WhatsApp.
## Improve

## Fix

## Details
- Improve the initialization flow on first run to remove the need to manually configure Khoj:
  - Allow setting Google and Anthropic chat models too. Previously only offline and OpenAI chat models could be set during init
  - Add multiple chat models for each LLM provider. Interactively set a comma-separated list of models for each provider
  - Auto-add default chat models for each provider in non-interactive mode if the {OPENAI,GEMINI,ANTHROPIC}_API_KEY env var is set
  - Do not ask for max_tokens or tokenizer for offline models during initialization. Use better defaults inferred in code instead
  - Explicitly set the default chat model to use. If unset, it implicitly defaults to the first chat model, so making it explicit reduces confusion
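The non-interactive auto-setup described above can be sketched roughly like this (a minimal illustration, not Khoj's actual implementation; the provider labels and default model names are placeholder assumptions):

```python
import os

# Illustrative sketch: register a default chat model for each provider
# whose API key environment variable is set, so first-run setup needs
# no interactive input. Model names here are hypothetical defaults.
PROVIDER_DEFAULTS = {
    "OPENAI_API_KEY": ("openai", "gpt-4o-mini"),
    "GEMINI_API_KEY": ("google", "gemini-1.5-flash"),
    "ANTHROPIC_API_KEY": ("anthropic", "claude-3-5-sonnet-latest"),
}

def default_chat_models() -> list[tuple[str, str]]:
    """Return (provider, model) pairs for each provider with an API key set."""
    return [pair for env_var, pair in PROVIDER_DEFAULTS.items() if os.getenv(env_var)]
```

With only `OPENAI_API_KEY` exported, this would register just the OpenAI default; exporting several keys registers one default model per provider.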
Resolves #882