-
This issue contains the test results for the upstream sync, develop PR, and release testing branches. Comment 'proceed with rebase' to approve. Close when maintenance is complete or there will be prob…
-
Using streamText with Ollama provider yields AI_JSONParseError.
It seems like everything works, except that it calls `JSON.parse()` on JSON snippets before they're fully read into the buffer.
Any…
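Setting the AI SDK and Ollama specifics aside, the failure mode described here (parsing a chunk before the full JSON object has arrived) can be sketched generically. The following is a minimal Python illustration of newline-delimited JSON buffering, not the SDK's actual stream handling: chunks are accumulated until a complete line is available before `json.loads` is called.

```python
import json

def iter_ndjson(chunks):
    """Yield parsed objects from a stream of newline-delimited JSON chunks.

    Chunks may split a JSON object anywhere; we buffer until a complete
    line (one whole object) is available instead of parsing partial data.
    """
    buf = ""
    for chunk in chunks:
        buf += chunk
        while "\n" in buf:
            line, buf = buf.split("\n", 1)
            if line.strip():
                yield json.loads(line)
    if buf.strip():  # trailing object without a final newline
        yield json.loads(buf)

# A response split mid-object parses cleanly once buffered:
chunks = ['{"response": "Hel', 'lo"}\n{"done": tr', 'ue}\n']
print(list(iter_ndjson(chunks)))  # [{'response': 'Hello'}, {'done': True}]
```

Parsing each raw chunk directly, by contrast, would raise a `JSONDecodeError` on the partial fragments, which is the same shape of error as the `AI_JSONParseError` reported above.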
-
### What is the issue?
I'm experiencing an issue with running the llama3 model (specifically, version 70b-instruct-q6) on multiple AMD GPUs. While it works correctly on ollama/ollama:0.1.34-rocm, I'v…
-
### What is the issue?
I sometimes find that Ollama runs a model that should be on the GPU on the CPU. I just upgraded to v0.1.32. I am still trying to find out how to reproduce the issue. I don't …
-
### What is the issue?
The ollama.ai certificate expired today, so Ollama can no longer download models:
```
ollama run mistral
pulling manifest
Error: pull model manifest: Get "https://registry.…
```
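When diagnosing this kind of failure, it can help to check a certificate's expiry locally. A minimal sketch using only the Python standard library: `ssl.cert_time_to_seconds()` parses the OpenSSL-style `notAfter` string (the format returned by `ssl.SSLSocket.getpeercert()`), which we then compare against the current time. The sample date below is illustrative, not the actual ollama.ai certificate.

```python
import ssl
import time

def cert_expired(not_after, now=None):
    """Return True if a certificate's notAfter timestamp is in the past.

    `not_after` uses the OpenSSL text form, e.g. "May  9 00:00:00 2007 GMT",
    as found in the "notAfter" field of ssl.SSLSocket.getpeercert().
    """
    expiry = ssl.cert_time_to_seconds(not_after)
    return (now if now is not None else time.time()) > expiry

# A date that is already in the past is reported as expired:
print(cert_expired("Jan  1 00:00:00 2020 GMT"))  # True
```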
-
### What is the issue?
`Error: llama runner process has terminated: signal: segmentation fault (core dumped)`. It occurs while loading larger models that are still within VRAM capacity. Here I…
-
### What is the issue?
I've installed the model in the Ollama Docker pod successfully. However, when attempting to execute a query, there seems to be an issue. I've tried running "ollama run llama3:i…
-
It is possible to request a specific backend, e.g. `PolynomialRing(QQ, 'x', implementation="singular")`, but the opposite (asking for the generic Sage implementation) is not possible. Fix this by allowing `imple…
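The requested behavior can be sketched as a dispatch on the `implementation` keyword that accepts an extra sentinel value for the generic backend. This is a hypothetical illustration, not Sage's actual constructor code; the backend names and the `"generic"` keyword are assumptions standing in for the real machinery.

```python
def polynomial_ring(base, var, implementation=None):
    """Hypothetical sketch of backend selection via an `implementation` kwarg.

    Strings stand in for the real ring classes. `None` means "pick the
    default backend"; "generic" explicitly requests the plain Sage one.
    """
    backends = {
        "singular": "SingularPolynomialRing",  # optimized external backend
        "generic": "GenericPolynomialRing",    # plain Sage implementation
    }
    if implementation is None:
        # Default: prefer the optimized backend when it is available.
        return backends["singular"]
    try:
        return backends[implementation]
    except KeyError:
        raise ValueError(f"unknown implementation {implementation!r}")

print(polynomial_ring("QQ", "x", implementation="generic"))
# GenericPolynomialRing
```

The point of the issue is the second branch: today only named backends like `"singular"` are accepted, and there is no value that forces the generic path.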
-
Hi everyone, I'd like to contribute to sbpy development, specifically the fitting routine for the different disk-integrated models in the `sbpy.photometry.core` module. I saw that there's already a…
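For context, the simplest of the disk-integrated models is a linear phase curve, m(α) = H + β·α. The following is a pure-stdlib least-squares sketch of fitting that form; it is an illustration of the fitting problem, not sbpy's actual API, which builds on `astropy.modeling` fitters.

```python
def fit_linear_phase(alphas, mags):
    """Least-squares fit of m(alpha) = H + beta * alpha.

    alphas: phase angles (deg); mags: reduced magnitudes.
    Returns the intercept H (absolute magnitude) and slope beta.
    """
    n = len(alphas)
    sx = sum(alphas)
    sy = sum(mags)
    sxx = sum(a * a for a in alphas)
    sxy = sum(a * m for a, m in zip(alphas, mags))
    beta = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    H = (sy - beta * sx) / n
    return H, beta

# Exact data on the line m = 5.0 + 0.04 * alpha recovers the parameters:
alphas = [0.0, 10.0, 20.0, 30.0]
mags = [5.0 + 0.04 * a for a in alphas]
H, beta = fit_linear_phase(alphas, mags)
print(round(H, 6), round(beta, 6))  # 5.0 0.04
```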
-
Hi all,
I wanted to open a separate issue on compilers from the one raised in PR #148, as I did not want the discussion of environment variables, compilers, C++ libraries, etc. to distract from the n…