-
ollama /set parameter num_ctx 4096
Can we use this when we do something like this:
llm_con$parameters$num_ctx
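For reference, a minimal sketch (in Python, against Ollama's REST API) showing that num_ctx can also be passed per request through the options field rather than only via the interactive /set command, independent of the client library in use; the model name is a placeholder:
```python
import requests

# Per-request options override the model's defaults on the Ollama server.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",           # placeholder model name
        "prompt": "Hello",
        "options": {"num_ctx": 4096},  # context window for this request only
        "stream": False,
    },
)
print(resp.json()["response"])
```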
-
### Issue
When benchmarking Ollama models in Docker, I found I could not use the regular Ollama API base URL as in the documentation. What I needed was to use "http://host.docker.internal:11434" (MacO…
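For context, a minimal sketch of pointing a client at the host's Ollama instance from inside a container, assuming the official `ollama` Python package and Docker Desktop's host.docker.internal alias; the model name is a placeholder:
```python
from ollama import Client

# Reach the Ollama server running on the Docker host, not inside this container.
client = Client(host="http://host.docker.internal:11434")

reply = client.chat(
    model="llama3.1",  # placeholder model name
    messages=[{"role": "user", "content": "ping"}],
)
print(reply["message"]["content"])
```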
-
Not sure if it's technically possible, but as intended, Ollama unloads the model from RAM when it's not being used for a while. To increase the speed of the returned output, maybe it could be possible to "ping…
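For what it's worth, the Ollama API already accepts a keep_alive value per request, and a request with an empty prompt just loads the model without generating; a minimal sketch (the model name is a placeholder):
```python
import requests

# Load the model and ask Ollama to keep it resident indefinitely.
requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",  # placeholder model name
        "prompt": "",         # empty prompt: load only, no generation
        "keep_alive": -1,     # negative value keeps the model in memory
    },
)
```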
-
Hi @lea-33,
how about introducing another LLM endpoint: [ollama](https://ollama.com/)? New vision models were published recently, namely [llama3.2-vision](https://ollama.com/library/llama3.2…
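For illustration, a minimal sketch of what such an endpoint could wrap, assuming the official `ollama` Python package and a locally pulled llama3.2-vision model; the image path is a placeholder:
```python
import ollama

# Multimodal models accept image paths via the "images" key of a chat message.
response = ollama.chat(
    model="llama3.2-vision",
    messages=[{
        "role": "user",
        "content": "Describe this image.",
        "images": ["example.jpg"],  # placeholder path to a local image
    }],
)
print(response["message"]["content"])
```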
-
### Self Checks
- [X] I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…
-
```python
import simplemind as sm
from simplemind.providers.ollama import Ollama
sm.settings.DEFAULT_LLM_PROVIDER = "ollama"
Ollama.DEFAULT_MODEL = "llama3.1"
print(sm.generate_text("hi"))
```
-
### Describe the bug
When accessing Bolt through a remote URL, Ollama models are not visible in the web UI, despite both services being individually accessible remotely. The models appear correctly…
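As a quick check (a sketch, assuming the Ollama server is reachable at the remote host's address, which is a placeholder here), the /api/tags endpoint lists the locally available models and helps tell an Ollama-side problem apart from a UI-side one:
```python
import requests

# Query the remote Ollama instance directly; if this returns the expected models,
# the problem is likely in how the web UI reaches Ollama, not in Ollama itself.
tags = requests.get("http://REMOTE_HOST:11434/api/tags").json()
print([m["name"] for m in tags.get("models", [])])
```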
-
I'm running docker and LLM-X in a container with the following compose.yml file:
```yaml
version: '3.8'
services:
  ollama:
    image: ollama/ollama
    runtime: nvidia
    environment:
…
```
-
Excuse me. Here is a piece of my code:
```python
extraction_strategy = LLMExtractionStrategy(
    provider='ollama_chat/qwen2.5-coder',
    url_base="http://localhost:11434",
…
```
-
{
  "hub-mirror": [
    "ollama/ollama:0.4.0-rc5"
  ]
}