Your AI second brain. Get answers to your questions, whether they are online or in your own notes. Use online AI models (e.g. GPT-4) or private, local LLMs (e.g. Llama 3). Self-host locally or use our cloud instance. Access from Obsidian, Emacs, the Desktop app, the Web, or WhatsApp.
Allow free tier users unlimited chats with the default chat model. They are only rate-limited, and at the same rate as subscribed users.
In the server chat settings, replace the concept of default/summarizer models with default/advanced chat models. Use the advanced model as the default for subscribed users.
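The default/advanced split can be sketched as a small settings object plus a selector keyed on subscription status. The class and field names here are assumptions for illustration, not the server's actual settings schema.

```python
from dataclasses import dataclass


@dataclass
class ServerChatSettings:
    """Hypothetical sketch: default and advanced chat models replace
    the old default/summarizer pair in the server chat settings."""

    default_chat_model: str
    advanced_chat_model: str


def chat_model_for(settings: ServerChatSettings, is_subscribed: bool) -> str:
    # Subscribed users get the advanced model by default
    return settings.advanced_chat_model if is_subscribed else settings.default_chat_model
```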
For each chat model option configuration, allow the admin to specify a separate `max_tokens` value for subscribed users. This lets server admins configure different max token limits for unsubscribed and subscribed users.
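One way to model the per-tier token limit is an optional override field on the chat model option that falls back to the base limit. The field names (`max_tokens`, `subscribed_max_tokens`) are assumptions for this sketch, not the admin panel's actual schema.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChatModelOption:
    """Hypothetical sketch of a chat model option with a separate
    token limit for subscribed users."""

    name: str
    max_tokens: int  # limit applied to unsubscribed (free) users
    subscribed_max_tokens: Optional[int] = None  # optional override for subscribers


def max_tokens_for(option: ChatModelOption, is_subscribed: bool) -> int:
    # Fall back to the base limit when no subscriber override is set
    if is_subscribed and option.subscribed_max_tokens is not None:
        return option.subscribed_max_tokens
    return option.max_tokens
```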
Show an error message in the web app when the user hits a rate limit or other server errors occur.