-
I don't understand how to set the chat_llm to ollama when there is no provision for pointing utility_llm and/or embedding_llm at local (Ollama) counterparts. Yes, I assume that prompting will be a challenge…
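A rough sketch of what a unified setup could look like. To be clear: the role names chat_llm / utility_llm / embedding_llm come from the issue, but the provider/endpoint/model values below are placeholders I made up, not the project's actual settings; Ollama's default local port is 11434.

```python
# Hypothetical config sketch: route all three LLM roles to a local Ollama
# instance. Endpoint and model names are assumptions for illustration only.

OLLAMA_BASE_URL = "http://localhost:11434"  # Ollama's default local endpoint

llm_config = {
    "chat_llm":      {"provider": "ollama", "base_url": OLLAMA_BASE_URL, "model": "llama3"},
    "utility_llm":   {"provider": "ollama", "base_url": OLLAMA_BASE_URL, "model": "llama3"},
    "embedding_llm": {"provider": "ollama", "base_url": OLLAMA_BASE_URL, "model": "nomic-embed-text"},
}

def resolve(role: str) -> dict:
    """Return the backend settings for a given LLM role."""
    return llm_config[role]
```

The point of the sketch is simply that each role needs its own provider entry; setting only chat_llm leaves the other two on whatever remote default the project ships with.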
-
I get the following error: `The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens...`
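The warning is expected behaviour: CLIP's text encoder has a fixed context length of 77 tokens (including special tokens), so anything beyond that is silently dropped. A toy illustration of the cap (real pipelines do this inside the tokenizer, e.g. Hugging Face's CLIPTokenizer; the helper name here is mine):

```python
CLIP_MAX_TOKENS = 77  # CLIP's fixed text context length, incl. BOS/EOS

def truncate_for_clip(token_ids, max_len=CLIP_MAX_TOKENS):
    """Keep the first max_len token ids; return (kept, dropped)."""
    return token_ids[:max_len], token_ids[max_len:]

# A 100-token prompt loses its last 23 tokens:
kept, dropped = truncate_for_clip(list(range(100)))
# len(kept) == 77, len(dropped) == 23
```

So the fix is not to suppress the message but to shorten the prompt, or to use a pipeline that splits long prompts into multiple 77-token chunks and combines the embeddings.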
-
**Describe the bug**
Renaming failed in the search results
**To Reproduce**
Steps to reproduce the behavior:
1. Search for the name of a folder
2. Long click the folder
3. Click “more option”
…
-
**Is your feature request related to a problem? Please describe.**
Sometimes prompts and inputs result in unpredictable LLM behaviour, especially at higher temperatures. This means that both the LLM …
-
### Proposal to improve performance
_No response_
### Report of performance regression
_No response_
### Misc discussion on performance
I am using vllm to deploy the qwen 7b chat model …
-
macOS Sequoia 15.0.1
Horos v3.3.6
When I open the application, it prompts me about an update for "Horos Cloud." Once I click it, the update begins to run in the left panel and the progress bar cycles b…
-
## Is your feature request related to a problem?
Not exactly - I currently have a CLI app that's using listr2, but tasuku looks nicer in that each task can have a return value, rather than listr2 w…
-
Hi, can the code be updated to allow BREAK inside regions, and to support long prompts for regions in matrix mode? I'm not sure yet what breaks the prompt with long prompts in regions, but BREAK handling is ob…
-
Grammar/spelling check
-
Issue templates are very helpful for a collaboration repo. When users identify a bug or want to add a new feature, you can provide templates so you can collect all the pertinent information you need t…
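As a minimal illustration, a bug-report template along the lines GitHub supports lives in `.github/ISSUE_TEMPLATE/` and pairs YAML front matter with prompts for the reporter (the `about` text and labels below are placeholder choices, not prescribed values):

```
---
name: Bug report
about: Report a problem so it can be reproduced and fixed
labels: bug
---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. …

**Expected behavior**
What you expected to happen instead.
```

Templates like this are why the bug reports earlier in this collection arrive with "Describe the bug" and "To Reproduce" sections already filled in.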