-
I downloaded this extension (Page Assist) in my browser (Brave). I open ollama.com --> click on Models --> llama3 --> usually it had a copy-command button (`ollama run llama3`), but now I can see downlo…
-
```
C:\Users\****>scoop update *
ollama: 0.1.42 -> 0.1.45
Updating one outdated app:
Updating 'ollama' (0.1.42 -> 0.1.45)
Downloading new version
Starting download with aria2 ...
Download: Download…
```
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue]…
-
Looks like the application is asking for an api_key even when running in local mode with Ollama; it should not require API-key credentials in that case.
openai.OpenAIE…
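A common workaround for OpenAI-compatible clients that insist on a key is to supply a dummy value, since a local Ollama server ignores it. A minimal sketch, assuming this is Continue's `config.json` (the `title`, `model`, and `apiKey` values below are illustrative placeholders, not taken from the report):

```json
{
  "models": [
    {
      "title": "Llama 3 (local)",
      "provider": "ollama",
      "model": "llama3",
      "apiKey": "dummy-key"
    }
  ]
}
```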
-
Hello, first of all, thanks for the amazing project.
I was able to run IPEX with llama.cpp, and it all worked fine; it ran very fast on both CPU and GPU. However, it didn't work for Ollama.
…
-
Is it possible to add support for other LLM backends, like the Ollama API?
-
```json
{
  "platform": "",
  "hub-mirror": [
    [
      "hub.docker.com/ollama/ollama"
    ]
  ]
}
```
-
After picking the Snowflake embedding model in the options and reloading the vault, I get this response: Error: Embedding function error: Error: Resource initialization failed: Error: Pipeline initiali…
-
### What is the issue?
I custom-compile Ollama for AMD and change the version file to read a custom version such as `0.1.47-amd`.
Yet when I run `ollama -v` within the compiled directory (I em…
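One likely explanation: Go projects such as Ollama typically embed their version string at build time via the linker's `-X` flag, so editing a version file alone may not change what `ollama -v` reports unless the build injects the new value. A minimal self-contained sketch of that mechanism (the variable path `main.Version` here is illustrative, not Ollama's actual version package):

```go
package main

import "fmt"

// Version holds a default that the linker can override at build time:
//
//	go build -ldflags "-X main.Version=0.1.47-amd"
//
// Without that flag, the compiled-in default below is printed.
var Version = "0.0.0-dev"

func main() {
	fmt.Println("version:", Version)
}
```

Building with `-ldflags "-X main.Version=0.1.47-amd"` makes the same binary print `version: 0.1.47-amd`, which is why the custom string must be wired into the build command rather than only into a source file the build ignores.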