-
### What is the issue?
Using `ollama:latest` with nvidia-docker and 2x4090.
Tried blasting a batch of 256-word text snippets at Ollama for embedding generation using `all-minilm:l6-v2`.
…
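For reference, a minimal sketch of the kind of request described above, assuming the standard Ollama `/api/embeddings` endpoint on the default port; the snippet contents, batch size, and timeout are illustrative only:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # default Ollama port

snippets = ["roughly 256 words of text ..."] * 100  # illustrative batch

embeddings = []
for text in snippets:
    # One request per snippet; /api/embeddings takes a model name and a prompt
    # and returns a JSON body with an "embedding" vector.
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "all-minilm:l6-v2", "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    embeddings.append(resp.json()["embedding"])

print(f"Generated {len(embeddings)} embeddings")
```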
-
### This issue is a centralized place to list and track work on adding support for new ops in the MPS backend.
[**PyTorch MPS Ops Project**](https://github.com/users/kulinseth/projects/1/vi…
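As background, a minimal sketch of exercising the MPS backend from Python; the op and tensor shapes are arbitrary, and whether a given op has a native MPS kernel depends on the PyTorch version:

```python
import torch

# The MPS backend is only available on Metal-capable Macs with a
# PyTorch build that includes MPS support.
if torch.backends.mps.is_available():
    device = torch.device("mps")
    a = torch.randn(8, 8, device=device)
    b = torch.randn(8, 8, device=device)
    # matmul has a native MPS kernel; ops that do not yet have one raise
    # NotImplementedError unless PYTORCH_ENABLE_MPS_FALLBACK=1 is set,
    # which falls back to the CPU implementation instead.
    c = a @ b
    print(c.device)  # mps:0
else:
    print("MPS backend not available on this machine.")
```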
-
### What is the issue?
Error: llama runner process has terminated: signal: aborted (core dumped)
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
1.40
-
### Your current environment
The output of `python collect_env.py`:
```text
Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorc…
```
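For context, a minimal sketch of how a few of the fields reported above can be gathered directly from the running interpreter using public `torch` attributes; this is an illustrative subset, not a replacement for `collect_env.py`:

```python
import torch

# A handful of the fields that collect_env.py reports.
print("PyTorch version:", torch.__version__)
print("Is debug build:", torch.version.debug)
print("CUDA used to build PyTorch:", torch.version.cuda)
print("CUDA available at runtime:", torch.cuda.is_available())
```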