-
### What is the issue?
I'm trying to get the project to compile on Gentoo but am running into some issues, as Gentoo uses different paths.
On Gentoo, ROCm libraries get installed into /usr/lib64, h…
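For what it's worth, a minimal sketch of pointing a CMake-based build at Gentoo's library location; `CMAKE_PREFIX_PATH` and `CMAKE_LIBRARY_PATH` are standard CMake variables, but whether this project's ROCm detection honors them is an assumption:
```
# Hint CMake toward Gentoo's /usr/lib64 instead of the usual /opt/rocm.
export CMAKE_PREFIX_PATH=/usr/lib64
cmake -B build -DCMAKE_LIBRARY_PATH=/usr/lib64
cmake --build build
```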
-
Hey. I'm not a computer scientist, but I thought you'd like to know that the latest pushed container image is causing issues with GPU inference for me.
System specs
CPU: AMD Ryzen 3600
GPU: I…
-
**Describe the bug**
After upgrading from 0.34.0 to 0.35.0, my code no longer compiles because the OllamaModelsBuilder methods are now package-private.
![image](https://github.com/user-attachments…
-
Running the `inflation.py` example from the repo, I expect it to call the custom tool for the `get_ticker_data` function, which is defined in the `custom_tools` folder by `ticker_data.py`. However, ba…
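For reference, a hypothetical sketch of what `custom_tools/ticker_data.py` might look like; the function body and the use of yfinance are assumptions, and the repo's actual tool-discovery mechanism may differ:
```
# custom_tools/ticker_data.py (hypothetical sketch, not the repo's actual code)
import yfinance as yf  # assumed data source; any price API would do

def get_ticker_data(ticker: str, start: str, end: str) -> dict:
    """Return daily closing prices for `ticker` between `start` and `end`."""
    history = yf.Ticker(ticker).history(start=start, end=end)
    return history["Close"].to_dict()
```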
-
### What is the issue?
I am working on a Modelfile and did the following...
```
ollama run --verbose llama3.1:8b
/set parameter num_ctx 131072
/set parameter num_predict -2
/save llama3.1:8b…
```
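For comparison, the same parameters can be baked in non-interactively with a Modelfile and `ollama create` (the target model name below is just a placeholder):
```
# Equivalent to the /set + /save flow above
FROM llama3.1:8b
PARAMETER num_ctx 131072
PARAMETER num_predict -2
```
followed by `ollama create llama3.1-longctx -f Modelfile`.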
-
{
  "platform": "",
  "hub-mirror": [
    "If you have no OS/ARCH requirement, leave platform empty and do not add any value; the default is linux/amd64",
    "To switch to the arm architecture, change platform to arm64 or linux/arm64/v8",
    "Format: the original image you want to convert$custom-image-name:custom-tag (where …
-
I am running ollama on my Google Compute Engine instance behind an nginx proxy. I can navigate to the endpoint and confirm that `/api/tags` returns a response, as do other endpoints such as `/api/version`.
…
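As a quick way to reproduce the check above, a short Python sketch against the same endpoints; the base URL is a placeholder for the nginx-proxied host:
```
import requests

base = "https://example.com"  # placeholder for the nginx-proxied host
print(requests.get(f"{base}/api/version", timeout=10).json())
print(requests.get(f"{base}/api/tags", timeout=10).json())
```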
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a…
-
I don't understand how to set chat_llm to ollama when there is no provision for setting utility_llm and/or embedding_llm to their local (ollama) counterparts. Yes, I assume that prompting will be a challenge…
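For illustration, a hedged sketch of what local (ollama) counterparts could look like, assuming the project accepts LangChain-style model objects; the variable names come from this issue and the model choices are placeholders:
```
# Hypothetical wiring, assuming LangChain-style objects are accepted.
from langchain_ollama import ChatOllama, OllamaEmbeddings

chat_llm = ChatOllama(model="llama3.1:8b")
utility_llm = ChatOllama(model="llama3.1:8b", temperature=0)
embedding_llm = OllamaEmbeddings(model="nomic-embed-text")
```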