-
- [x] Screenshot of note + Copilot chat pane + dev console added **(required)**
**Describe how to reproduce**
I am trying to use the QA functionality but am having issues setting up local embeddings. I a…
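For reference, this is roughly what I expected the local embedding side to look like — a minimal sketch assuming the backend is an Ollama server on its default port with the `nomic-embed-text` model already pulled (both are my assumptions, not the plugin's documented configuration):

```python
import requests

# Minimal sketch: request an embedding from a locally running Ollama server.
# Assumes Ollama is listening on its default port (11434) and that the
# `nomic-embed-text` model has already been pulled.
def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": model, "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

if __name__ == "__main__":
    vec = embed("hello world")
    print(len(vec))  # dimensionality of the returned embedding vector
```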
-
First of all, thank you so much for building Perplexica! It's super helpful to be able to use something like Perplexity with Ollama.
I have a feature request: it would be great if Perplexica allowed t…
-
I followed "【ChatOllama安装与配置教程】01 基于Docker安装ChatOllama,3分钟搞定100%本地化知识库" (ChatOllama installation and configuration tutorial 01: install ChatOllama with Docker and get a 100% local knowledge base in 3 minutes) step by step.
I used a local Docker install, but during installation there was a PeanutShell that doesn't match the video, and it took a very long time to download. The command line shows its size as 8.9 GB, but my C: drive usage went up by at least 20 GB. What is this? As a beginner I'm a bit worried — any guidance would be appreciated, thanks!
-
### Before submitting your bug report
- [x] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [x] I'm not able to find an [open issue](ht…
-
Thanks for building this. The interface and functionality are very well done!
Do you have plans to integrate vector DBs into each "app", like the ability to connect to PGVector, Chroma, Pinecone, et…
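To illustrate the kind of integration I mean, here is a minimal sketch using Chroma's Python client (the collection name and documents are placeholders; PGVector or Pinecone would follow the same pattern):

```python
import chromadb

# Sketch of the kind of vector-DB hookup I have in mind, using Chroma as an
# example. Collection name and documents are placeholders.
client = chromadb.PersistentClient(path="./chroma-data")
collection = client.get_or_create_collection("app_documents")

# Index a couple of documents (Chroma embeds them with its default embedder).
collection.add(
    ids=["doc-1", "doc-2"],
    documents=["How to configure the app", "Release notes for v1.2"],
)

# Retrieve the most relevant document for a user query.
results = collection.query(query_texts=["configure settings"], n_results=1)
print(results["documents"])
```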
-
I have "s-m" bound to `gptel-menu`; when I hit `s-m`, I get:
```elisp
Debugger entered--Lisp error: (void-function gptel--sanitize-model)
gptel--sanitize-model()
gptel-menu()
funcall-interactivel…
-
While I understand that using GPT-4 gives the best results, the landscape changes very quickly. Also, some users have strict security requirements and can only run local LLMs.
Instead of trying to support all …
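For example, anything that speaks the OpenAI-compatible chat API can already be pointed at a local server. This is a rough sketch assuming an Ollama instance exposing its OpenAI-compatible `/v1` endpoint; the model name is a placeholder:

```python
from openai import OpenAI

# Sketch: point the standard OpenAI client at a local, OpenAI-compatible
# server (here: Ollama's /v1 endpoint). The model name is a placeholder;
# the api_key is required by the client but ignored by the local server.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```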
-
Hello! I want to buy a Lenovo Xiaoxin 14 AI laptop with an AMD Ryzen 7 8845H for my birthday, and I will install Artix Linux on it. Are you planning to add AMD Ryzen NPU support to Ollama on Linux and Windows? …
-
Hi, thank you for the wonderful ollama project and the amazing community!
I am testing the Mixtral 3-bit quantized model on an RTX400 with 20 GB of VRAM. The model is 20 GB in size and, as you ca…
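As a rough back-of-the-envelope check (assuming Mixtral 8x7B's roughly 46.7B total parameters; these are my estimates, not measured numbers):

```python
# Rough estimate of why a ~3-bit Mixtral barely fits in 20 GB of VRAM.
# All figures are approximations, not measurements.
params = 46.7e9          # total parameters in Mixtral 8x7B (approx.)
bits_per_weight = 3.5    # effective bits/weight for a "3-bit" quant incl. scales
weights_gb = params * bits_per_weight / 8 / 1e9
overhead_gb = 2.0        # KV cache, activations, CUDA context (assumed)
print(f"weights ~ {weights_gb:.1f} GB, total ~ {weights_gb + overhead_gb:.1f} GB")
# weights ~ 20.4 GB, total ~ 22.4 GB -> does not fully fit in 20 GB of VRAM
```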
-
It would be great to be able to rename chat conversations, and even to press a button and ask the AI to do so. I tried asking the AI, but it is always too wordy. It would be preferable to set up a conversat…
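For example, a constrained title prompt along these lines usually keeps the name short — a sketch using the `ollama` Python client, where the model name and the five-word limit are placeholders rather than a proposed default:

```python
import ollama

# Sketch: ask a local model for a short conversation title.
# Model name and the five-word limit are placeholders.
def suggest_title(conversation_text: str, model: str = "llama3") -> str:
    response = ollama.chat(
        model=model,
        messages=[
            {"role": "system",
             "content": "Name this conversation in at most five words. Reply with the title only."},
            {"role": "user", "content": conversation_text},
        ],
    )
    return response["message"]["content"].strip()

print(suggest_title("User asked how to mount an NFS share at boot..."))
```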