-
I have created a custom model using `ollama create custom_model -f modelfile`. The custom model is based on codellama. Some examples and context are provided in the modelfile. In the CLI interface…
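For reference, a minimal sketch of what such a Modelfile might look like; the system prompt and example exchange below are placeholders, not the original file's contents:

```
# Hypothetical Modelfile: base model plus baked-in context and examples
FROM codellama

# System prompt providing context (placeholder text)
SYSTEM You are a coding assistant that follows our internal style guide.

# Example exchange embedded via MESSAGE directives (illustrative only)
MESSAGE user How do I reverse a list in Python?
MESSAGE assistant Use reversed() or the slice syntax my_list[::-1].
```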
-
It would be amazing to have a `services.ollama` module in Home Manager, just like the one we have in [NixOS](https://search.nixos.org/options?channel=unstable&show=services.ollama.enable&from=0&size=50&sor…
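For illustration, a sketch of what such a configuration might look like if the module mirrored the NixOS options; the Home Manager option names here are an assumption, not an existing API:

```nix
{
  # Hypothetical Home Manager module, mirroring NixOS's services.ollama
  services.ollama = {
    enable = true;
    # These options exist in the NixOS module; their presence in a
    # Home Manager port is an assumption.
    host = "127.0.0.1";
    port = 11434;
    acceleration = "rocm"; # or "cuda"
  };
}
```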
-
I ran `pip install -r requirements.txt` successfully. Now what? How do I run the app?
-
### Describe your problem
But the LLM selection is limited. I have an Ollama (Mistral) instance running at 127.0.0.1:11434 but cannot add Ollama as a model in RagFlow. Please assist. This software is very good and flexib…
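One common cause, offered as an assumption since the setup details are truncated: RagFlow typically runs in Docker, where `127.0.0.1` refers to the container itself rather than the host running Ollama. A quick way to check whether the Ollama endpoint is reachable:

```sh
# Verify Ollama is serving (run on the host first)
curl http://127.0.0.1:11434/api/tags

# From inside a Docker container, 127.0.0.1 is the container itself;
# try the Docker host alias instead (works on Docker Desktop; on Linux
# it may require an extra_hosts host-gateway mapping)
curl http://host.docker.internal:11434/api/tags
```

If the second call fails, the base URL entered in RagFlow needs to point at an address that is reachable from inside the container, for example `http://host.docker.internal:11434`.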
-
### Bug Description
The local Ollama cannot chat. I don't know which setting is wrong; please help me!
### Steps to Reproduce
The local Ollama cannot chat.
### Expected Behavior
The local Ollama cannot chat, but other applications work.
### Screenshots
![image](https://github.com/ChatGPTNextWeb/ChatGP…
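A frequent cause of this symptom, offered as an assumption since the screenshot is truncated: browser-based clients like ChatGPT-Next-Web send an Origin header that Ollama rejects by default. Allowing the origin and restarting the server often resolves it:

```sh
# Allow cross-origin requests from browser clients
# (the wildcard is broad; suitable for local testing)
OLLAMA_ORIGINS="*" ollama serve
```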
-
### Bug Description
Cannot connect to the local Ollama server. Ollama and ChatNext are both on the latest version. I can get an Ollama response from a Python script, so the server is OK.
### Steps to Reproduce
![微信图片_…
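For comparison, a minimal sketch of the kind of Python check the author describes, using Ollama's plain HTTP API; the model name and prompt are placeholders:

```python
import requests

# Minimal connectivity check against the local Ollama HTTP API
resp = requests.post(
    "http://127.0.0.1:11434/api/generate",
    json={"model": "llama2", "prompt": "Hello", "stream": False},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])
```

If a script like this succeeds while the web UI fails, the difference is usually the browser's Origin header; see the `OLLAMA_ORIGINS` note above.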
-
Hi, I'm having this issue with connecting to external LLMs.
Environment of the server hosting the remote LLM:
- AMD 7950X3D
- 64 GB RAM
- 2x 7900 XTX
- Using LM Studio for hosting the LLM server
Environment Cli…
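A first diagnostic step, assuming LM Studio's defaults: LM Studio exposes an OpenAI-compatible server, by default on port 1234, and it must be set to listen on the network rather than localhost only. From the client machine:

```sh
# Replace SERVER_IP with the LAN address of the machine running LM Studio
# (port 1234 is LM Studio's default; adjust if it was changed)
curl http://SERVER_IP:1234/v1/models
```

If this times out, enable serving on the local network in LM Studio's server settings, or check the server's firewall.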
-
### What is the issue?
Ollama is failing to run on the GPU; instead it uses the CPU. If I force it with `HSA_OVERRIDE_GFX_VERSION=9.0.0`, then I get `Error: llama runner process has terminated: signal: abo…
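For context, `HSA_OVERRIDE_GFX_VERSION` must match (or be compatible with) the GPU's actual gfx target, and a mismatched value commonly produces exactly this crash. A way to check which target the card reports, assuming the ROCm tools are installed:

```sh
# List the gfx targets ROCm sees (rocminfo ships with the ROCm stack)
rocminfo | grep -i gfx

# Example mapping only: a gfx1030 card would use 10.3.0, not 9.0.0;
# the right value depends on the card reported above
HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve
```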
-
### What is the issue?
`Error: llama runner process has terminated: signal: segmentation fault (core dumped)`. It occurs while loading larger models that are still within the VRAM capacity. Here I…
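One way to narrow this down (a diagnostic sketch, not a fix from the report): reduce the number of layers offloaded to the GPU via the `num_gpu` request option and see whether the segfault disappears, which would point at a VRAM or driver issue rather than the model file. The model name below is a placeholder:

```sh
# Offload only 20 layers to the GPU (num_gpu is a passthrough for
# llama.cpp's n_gpu_layers); lower it further if the crash persists
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama2:13b",
  "prompt": "Hello",
  "stream": false,
  "options": { "num_gpu": 20 }
}'
```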