-
**Describe the bug**
At least with gemma2-27b, the context length setting in the Ollama model settings appeared to do nothing.
To get a larger context length, I had to create an Ollama Modelfile with `PARAMET…
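For reference, a minimal Modelfile sketch for raising the context window, assuming the standard `num_ctx` parameter; the model tag and output name are examples:

```
# Modelfile (example): raise the context window to 8192 tokens
FROM gemma2:27b
PARAMETER num_ctx 8192
```

Building it with `ollama create gemma2-27b-8k -f Modelfile` then yields a model that keeps the larger context by default.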
-
Ollama support is tracked at this link: https://github.com/win4r/MoA
-
Thanks for the amazing work; of course, for a brand-new beginner it takes some time to set up the entire environment.
As the readme already says, the current best model to use is the gpt4o model, which ma…
-
The [readme](https://github.com/insulineru/ai-commit?tab=readme-ov-file#using-local-model-ollama) says "Set PROVIDER in your environment to ollama"
```bash
$ export PROVIDER=ollama
$ export AI_PR…
-
The current GitHub repo does not seem to bundle any Ollama engine, and no quick installation document is provided.
A clear and concise description of the prerequisites, and also of Ollama installation and config docum…
-
The first time, I downloaded this node, Ollama, Docker, and OpenWebUI, and ran Ollama inside Docker. At that point, the node could correctly read the models downloaded in Ollama, but could not use them. Later, because all of these install to the C drive by default and take up too much space, and because the download problem could never be resolved, I completely removed Ollama, Docker, and OpenWebUI, reinstalled Ollama, and changed Ollama's model installation path via a system environment variable to D…
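For reference, relocating the model directory is normally done through the documented `OLLAMA_MODELS` environment variable; a sketch with an illustrative path (the actual drive path above is truncated):

```
# System environment variable (Windows example; path is illustrative)
OLLAMA_MODELS=D:\ollama\models
```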
-
Is it possible to use a local LLM via Ollama? If so, what's the setup, and what are the requirements for which LLMs I can use (guessing it has to use the OpenAI API syntax)?
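For what it's worth, Ollama does serve an OpenAI-compatible endpoint at `http://localhost:11434/v1/chat/completions`, so any client that speaks that syntax should work. A minimal sketch of the request body (the model name is just an example of a locally pulled model):

```python
import json

# Build a chat request in OpenAI syntax; POST it to
# http://localhost:11434/v1/chat/completions on a machine running Ollama.
def build_chat_request(model: str, prompt: str) -> dict:
    return {
        "model": model,  # any locally pulled model, e.g. "llama3"
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("llama3", "Hello")
print(json.dumps(payload))
```

The same shape works with the official `openai` client by pointing `base_url` at `http://localhost:11434/v1`.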
-
**Describe the bug**
System: Fedora 40, KDE spin, 16 GB RAM, 12 × Intel® Core™ i5-10400 CPU @ 2.90 GHz, NVIDIA GTX 1650 Super.
The fo…
-
**Update (9/30): https://github.com/miurla/morphic/issues/215#issuecomment-2381902347**
## Overview
Currently, Ollama support is an unstable and experimental feature. This feature is implemented…
-
- jupyter-ai: 2.23.0
- llama3.1 working without issue
- `mxbai-embed-large` curl on the Jupyter terminal working without issue
When running the `/learn` command, I am seeing that the Jupyter AI chat UI is calling us…