Closed by den47999 23 hours ago
One more question: the settings list a model with a 128,000-token context, but when I send a long prompt of about 5,000 tokens to the chat, an error pops up.
Hi @den47999! We've capped the token limit used for this model. Thanks for pointing this out, I'll update it.
You'd like a custom prompt in the cloud instance?
I want the token limit to be 128,000, since that helps me with writing code. Please also add temperature selection and prompt roles.
Hey @den47999, are you self-hosting Khoj or using the Khoj cloud service at https://app.khoj.dev? On the Khoj cloud service, we currently limit the context window to 60K tokens for subscribed users and 30K tokens for free tier users to manage service costs. If you're self-hosting, you can set the max prompt size to whatever you like.
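The per-tier limits above can be sketched as a simple check. This is purely illustrative (not Khoj's actual code); the tier names and the `check_prompt` helper are assumptions for the example:

```python
# Illustrative sketch of the per-tier context-window limits described above.
# Tier names and this helper are hypothetical, not Khoj's actual implementation.
TOKEN_LIMITS = {
    "subscribed": 60_000,   # Khoj cloud, subscribed users
    "free": 30_000,         # Khoj cloud, free tier
    "self_hosted": None,    # no enforced cap when self-hosting
}

def check_prompt(token_count: int, tier: str) -> bool:
    """Return True if a prompt of `token_count` tokens fits the tier's limit."""
    limit = TOKEN_LIMITS[tier]
    return limit is None or token_count <= limit
```

So a 128K-token prompt would only pass on a self-hosted instance, which matches the advice above.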
For custom prompts, we've recently enabled the ability to create agents on Khoj cloud (or self-hosted) with their own custom prompts, tools, and knowledge bases.
Temperature is set by the application based on the chat model and the sub-task the model is expected to do (e.g., generate document search queries, infer output mode, etc.). It is not user or admin controllable currently. It'd be good to know what you're hoping to get by changing the temperature. This will allow us to find the best way to enable that use case.
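The per-sub-task temperature selection described above could look something like the sketch below. The task names and temperature values are hypothetical examples, not Khoj's actual settings:

```python
# Hypothetical illustration of choosing temperature per sub-task.
# Task names and values are examples only, not Khoj's real configuration.
TASK_TEMPERATURE = {
    "generate_search_queries": 0.0,  # deterministic, structured output
    "infer_output_mode": 0.0,        # classification-like decision
    "chat_response": 0.7,            # more creative free-form replies
}

def temperature_for(task: str, default: float = 0.2) -> float:
    """Look up the temperature for a sub-task, falling back to a default."""
    return TASK_TEMPERATURE.get(task, default)
```

The idea is that low-variance sub-tasks (query generation, mode inference) want near-zero temperature, while the user-facing chat response can afford more randomness.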
Closing this issue for now, but let me know if this doesn't answer your questions.
Good afternoon :) Could you add the ability to set your own prompt and roles? Please also add the ability to adjust the temperature, as I'd like to set it to 0.1–0.3. Also, the settings list a model with a 128,000-token context, but when I send a long prompt of about 5,000 tokens to the chat, an error comes up:
414 Request-URI Too Large