-
Hi kijai, may I ask why extra_model_paths.yaml does not work for the Kwai Kolors model? I pointed the path of the diffusers folder to another SSD drive, but it didn't work at all. Can you tell me what…
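For context, a minimal sketch of how an extra_model_paths.yaml entry is typically structured in ComfyUI; the drive letter and folder names here are placeholders, not taken from the report:

```yaml
# Hedged sketch of extra_model_paths.yaml (hypothetical paths).
# base_path is prepended to each model folder listed under it.
comfyui:
    base_path: D:/models/
    diffusers: diffusers/
    checkpoints: checkpoints/
```

Whether a given custom-node loader honors these entries depends on it resolving paths through ComfyUI's folder_paths machinery rather than a hard-coded location.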
-
Ollama logs look awesome in Humanlog but could use a few improvements.
![image](https://github.com/user-attachments/assets/eb731310-f80d-4df1-b287-8efb046ef410)
Logs attached: [ollama_serve_output…
-
## Description
When I attempt to plot stock prices using the code execution tool, it only runs the bash script for package installation but fails to execute the subsequent Python code. Additionall…
-
Could we include the mentioned packages in the tur PyPI?
Actually, I want to package open-webui (a popular LLM frontend). It uses versioned packages that will easily conflict with the general Termux env…
-
Maybe reading the input field of the prompt directly as the LLM prompt would be a good approach.
Also, the vision LLM seems not to be working, with this error log:
```
Traceback (most recent call last):
File "E:\stable-diff…
-
### Describe the feature you'd like
I currently use Ollama via Open-WebUI in Docker. That provides an Ollama API, but behind a per-user API key. That is more secure than having an open, unauthenticated…
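To illustrate the request, here is a hedged sketch of calling an OpenAI-compatible chat endpoint (as Open-WebUI exposes one) with a per-user Bearer token, rather than hitting an open, unauthenticated Ollama port directly. The base URL, model name, and key are placeholders, not values from the report:

```python
import json
import urllib.request


def build_request(base_url: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Build an authenticated chat-completion request (hypothetical values)."""
    payload = json.dumps({
        "model": "llama3",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/api/chat/completions",
        data=payload,
        headers={
            # Per-user key checked by the frontend, unlike a bare Ollama port.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


req = build_request("http://localhost:3000", "sk-example", "hello")
```

The point of the sketch is only that every request carries a per-user credential; sending it with `urllib.request.urlopen(req)` would then be gated by that key.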
-
Functional discussion for this project.
[notebooks/llm-chatbot](https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/llm-chatbot)
Intel's official documentation: https://www…
-
**Description**
Sometimes you have crashes like the ones below.
`ui_model_menu.py` is quite nice and dandy, reporting the Python stack trace when it can, but as you can see below, if you were only to …
-
### Describe the bug
Whenever I load certain GGUFs, I get the above error message in the terminal. I have seen it happen with Bartowski's Q8 quant of Llama3 70B Instruct (3-part file) and llama-3-70B-…
-
### Self Checks
- [x] This is only for bug report, if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have s…