-
**Description:**
During installation, if a newer version of Ollama is available, ask the user for confirmation and update in-process.
This should apply to all supported platforms: macOS, Linux, and Windows.
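The update check described above could be sketched as follows. This is a minimal illustration, not the installer's actual logic: the function name `version_lt` and the hard-coded version strings are assumptions, and fetching the latest release tag is left as a comment.

```shell
#!/bin/sh
# Hypothetical sketch of an in-process update check. Version strings and
# names here are illustrative, not taken from the real installer.

# Succeeds (exit 0) when $1 is strictly older than $2, using version sort.
version_lt() {
  [ "$1" != "$2" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

INSTALLED="0.3.14"   # e.g. parsed from `ollama --version` (assumed value)
LATEST="0.4.1"       # e.g. fetched from the releases API (assumed value)

if version_lt "$INSTALLED" "$LATEST"; then
  printf 'A newer version (%s) is available. Update now? [y/N] ' "$LATEST"
  # read -r answer && [ "$answer" = "y" ] && run_installer_update  # hypothetical
fi
```

The same `sort -V` comparison works on macOS, Linux, and (via Git Bash or WSL) Windows, which matches the cross-platform requirement.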
-
### What is the issue?
Hi,
I would like to ask for your help.
I am running Ollama on the following GPU, but it seems that Ollama is not picking it up. Is there any advice?
AMD Ryzen™ 7 7840U pr…
-
IBM Granite 3.0 is the latest evolution of large language models (LLMs) designed specifically for enterprise use. In this short blog, I want to demonstrate how to deploy the new Granite 3.0 models on …
-
### What is the issue?
When my PC goes to sleep, the GPU connection is sometimes lost.
`2024/11/15 19:56:13 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP…
-
### Describe the bug
- Cloned the repo
- Installed everything needed
- Created the modelfile:
  ```
  FROM qwen2.5-coder:7b
  PARAMETER num_ctx 32768
  ```
- Ran the query in PowerShell, but either I don't see o…
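The modelfile steps above could be scripted as follows. This is a sketch assuming Ollama is installed and the `qwen2.5-coder:7b` weights are pulled; the custom model name `qwen-32k` is illustrative.

```shell
#!/bin/sh
# Write the Modelfile described in the bug report.
cat > Modelfile <<'EOF'
FROM qwen2.5-coder:7b
PARAMETER num_ctx 32768
EOF

# With Ollama available, building and querying the model would then be:
# ollama create qwen-32k -f Modelfile
# ollama run qwen-32k "your query here"
grep -q 'num_ctx 32768' Modelfile && echo "Modelfile written"
```

Note that `num_ctx` must be set in the Modelfile (or per request) because the default context window is smaller than 32768 tokens.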
-
Hi Team,
I am already using LM Studio and Ollama for model deployments. Given that this model is llama.cpp compatible, how can it be deployed, hosted, and used with LM Studio or Ollama? It …
-
Here are the container parameters:

```shell
export DOCKER_IMAGE=intelanalytics/ipex-llm-inference-cpp-xpu:latest
export CONTAINER_NAME=ipex-llm-inference-cpp-xpu-container
podman run -itd \
…
```
-
> Please 👍 this feature request if you want chatgpt-shell to support different models (see [parent feature request](https://github.com/xenodium/chatgpt-shell/issues/244)). Also consider [sponsoring](h…
-
Sometimes, while using nano-graphrag, progress gets stuck during the Entity Extraction step.
While stuck, the terminal output looks like this:
```
"Processed 26 chunks, 378 entities found…
-
It's time to add a new model provider, named "Ollama", that can run locally.
Here's the documentation: https://github.com/ollama/ollama/blob/main/docs/api.md
Goal:
- Create a script that can comm…
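A script talking to the linked API could start from a sketch like this. The `POST /api/generate` endpoint and JSON fields match the documentation at the URL above; the host, the model name `llama3`, and the helper name `make_request` are assumptions about the local setup, and no JSON escaping of the prompt is done here.

```shell
#!/bin/sh
# Sketch of a request to the Ollama HTTP API (POST /api/generate).
OLLAMA_HOST="http://localhost:11434"   # default Ollama port

# Build the JSON request body. Prompt is interpolated verbatim, so this
# sketch does not handle quotes or other characters needing JSON escaping.
make_request() {
  printf '{"model":"%s","prompt":"%s","stream":false}' "$1" "$2"
}

BODY=$(make_request "llama3" "Why is the sky blue?")

# With a local server running, the actual call would be:
# curl -s "$OLLAMA_HOST/api/generate" -d "$BODY"
echo "$BODY"
```

Setting `"stream":false` returns one complete JSON object instead of a stream of partial responses, which is simpler for a first integration.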