-
This may have been caused by the browser closing unexpectedly.
-
Hi everyone, I previously posted about this "error"; could anyone help me? As you can see, when I try to make the inline completion model work, nothing happens; only the GPU usage goes to 100% pra…
-
### What is the issue?
Generating a response after first starting Ollama works flawlessly from what I can tell. I am able to change models and generate responses from prompts. After the model unloa…
-
To cache the encodings and update them when the vault.txt file changes, you can modify the code as follows:
```python
import torch
from sentence_transformers import SentenceTransform…
```
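A minimal sketch of that caching strategy, assuming the vault is re-embedded only when its content hash changes. The `embed` callable stands in for the `SentenceTransformer.encode` call in the truncated snippet, and the file and function names are illustrative, not from the original code:

```python
import hashlib
import json
import os


def file_hash(path):
    # Hash the vault contents so changes between runs can be detected.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def load_embeddings(vault_path, cache_path, embed):
    # Reuse the cached embeddings when vault.txt is unchanged;
    # otherwise re-embed every line and refresh the cache file.
    current = file_hash(vault_path)
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            cache = json.load(f)
        if cache.get("hash") == current:
            return cache["embeddings"]
    with open(vault_path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    embeddings = [embed(line) for line in lines]
    with open(cache_path, "w") as f:
        json.dump({"hash": current, "embeddings": embeddings}, f)
    return embeddings
```

In the real script, `embed` would be something like `lambda s: model.encode(s).tolist()`, so the vectors serialize cleanly to JSON.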
-
I apologize if this is not the appropriate place for questions, concerns, or suggestions regarding the project.
One of the major challenges with AI is how quickly things progress, and understanding…
-
Runtime environment:
ChatOllama is installed and run via docker compose; Ollama runs in Docker.
In Settings, the local Ollama host is configured as http://host.docker.internal:11434.
Steps to reproduce:
Log in to ChatOllama;
Create a local knowledge base, with Ollama's nomic-embed-text (already downloaded) as the embedding model;
Select a PDF…
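When both services run in Docker, `http://host.docker.internal:11434` only resolves from inside the ChatOllama container if Docker provides the host-gateway mapping (automatic on Docker Desktop, explicit on Linux). A hedged sketch of one way to wire this up; the ChatOllama image name and port are assumptions, not from the report above:

```shell
# Run Ollama, publishing its API on the host (11434 is Ollama's default port).
docker run -d --name ollama -p 11434:11434 ollama/ollama

# On Linux, add the host-gateway mapping so host.docker.internal resolves
# inside the ChatOllama container (image name and port are placeholders).
docker run -d --name chatollama \
  --add-host=host.docker.internal:host-gateway \
  -p 3000:3000 chatollama-image

# Verify Ollama is reachable from inside the ChatOllama container.
docker exec chatollama curl -s http://host.docker.internal:11434/api/tags
```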
-
Firstly, thank you for all the amazing work! This is not a major critique, just a few bystander observations.
Let's start with a few numbers from a comparable project in this space to show that this i…
-
I'm trying Claude-Dev with Ollama.
I can connect, see all the models I have, and select the one I want to use.
When I try to run a prompt, I get an error: API Request Failed -> 404 Page not found
I see i…
-
- [ ] Screenshot of note + Copilot chat pane + dev console added **(required)**
I am using the qwen2.5:7b local model through Ollama, which works fine in version 2.6.0 of the Copilot plugin. Howeve…
-
Could you please add BGE-small-zh and Jina-small-zh as embedding models? They should improve embedding performance for Chinese vaults. Thank you.