-
How can I use the generated embeddings with the generateCompletion() function?
I tried passing them as an option:
```
$embeddings = $ollamaClient->generateEmbeddings($documents, 'nomic-embed-text');
```
…
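The usual pattern is that embeddings are not passed to the completion call at all: you embed the documents once, embed the question, pick the closest document(s) by cosine similarity, and paste their text into the prompt that goes to the completion call. A minimal Python sketch of that retrieval step — the toy vectors stand in for whatever `generateEmbeddings()` returns, and `build_prompt()` is a hypothetical helper, not part of any client library:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def build_prompt(question, docs, doc_embeddings, question_embedding, top_k=1):
    """Rank docs by similarity to the question and inline the best ones."""
    ranked = sorted(range(len(docs)),
                    key=lambda i: cosine(doc_embeddings[i], question_embedding),
                    reverse=True)
    context = "\n".join(docs[i] for i in ranked[:top_k])
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

The assembled prompt string is then what you hand to the completion call; the embeddings themselves never leave the retrieval step.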
-
If we can tell a model to look at a picture, we should be able to tell it to read from a text file.
There are so many cases where I want to frame a question with data or text, and that just doesn't work.
…
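One workaround, assuming a client that accepts a plain prompt string: read the file yourself and splice its contents into the prompt. `frame_question()` below is a hypothetical helper, and the character limit is only a rough guard against overflowing the model's context window:

```python
from pathlib import Path

def frame_question(question, path, max_chars=8000):
    """Build a prompt that quotes a text file, truncated to fit the context."""
    text = Path(path).read_text(encoding="utf-8")[:max_chars]
    return (
        "Here is the contents of a file:\n"
        "---\n"
        f"{text}\n"
        "---\n"
        f"{question}"
    )
```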
-
Let's experiment with https://github.com/StanGirard/quivr
* [ ] Just try to make it work, with the examples given in the quivr docs
* [ ] Goal 1: can I train a brain to fine-tune the GPT for my static web…
-
I remember someone reported this bug before, and the eventual fix was adding a Stop button.
I have recently been hitting the same problem. Here is what I found:
- The conversation itself is fine and answers questions according to the KB, but the source information is not displayed.
- The log looks normal and shows the complete rerank information.
- The conversation never stops on its own; the button does not revert to "Enter" and keeps showing "STOP".
It feels like after reranking, the reranked results have already been passed to the LLM and the session keeps running, …
-
### Is there an existing feature request for this?
- [X] I have searched the existing issues
### Summary
Currently, we don't have an AI-based chat assistant on our site to assist new users to answer…
-
Hi, I want to upload a fine-tuned Llama 3-instruct model to Ollama. I followed https://docs.unsloth.ai/tutorials/how-to-finetune-llama-3-and-export-to-ollama to do it, but it didn't generate the M…
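If the missing artifact is the file Ollama needs to register the model, it can also be written by hand. A sketch, assuming the export produced a GGUF file (the filename, model name, and stop token below are placeholders; adjust them to your actual export):

```
# Minimal hand-written Modelfile; the GGUF path is a placeholder.
cat > Modelfile <<'EOF'
FROM ./llama3-finetune.Q8_0.gguf
PARAMETER stop "<|eot_id|>"
EOF

# Register and try the model locally.
ollama create my-llama3-ft -f Modelfile
ollama run my-llama3-ft "Hello"
```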
-
### What are you trying to do?
A huge problem with usability right now is that we cannot let users enjoy browser-based UIs out of the box. There is the CORS protection that cannot be removed, and it b…
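For the specific case of Ollama, the allowed origins are configurable rather than fixed: the server reads the `OLLAMA_ORIGINS` environment variable at startup. A sketch (the origin value is an example):

```
# Allow a specific browser origin, or "*" for any, then start the server:
OLLAMA_ORIGINS="https://myapp.example.com" ollama serve

# On systemd-based installs, set it on the service instead:
#   Environment="OLLAMA_ORIGINS=*"
# and restart with: systemctl restart ollama
```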
-
How can I use streaming to return messages as they are produced, instead of waiting until everything has been processed?
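With Ollama's HTTP API, setting `"stream": true` makes `/api/generate` return newline-delimited JSON, one chunk per line, so partial output can be shown as it arrives. A Python sketch of the consumer side — `parse_stream()` is a hypothetical helper, and the `requests` usage in the trailing comment assumes a local server on the default port:

```python
import json

def parse_stream(lines):
    """Yield response fragments from an NDJSON stream until "done": true."""
    for raw in lines:
        if not raw:
            continue  # iter_lines() can emit keep-alive blank lines
        chunk = json.loads(raw)
        if chunk.get("response"):
            yield chunk["response"]
        if chunk.get("done"):
            break

# Against a real server you would wrap it like this (not run here):
#   import requests
#   r = requests.post("http://localhost:11434/api/generate",
#                     json={"model": "llama3", "prompt": "Hi", "stream": True},
#                     stream=True)
#   for token in parse_stream(r.iter_lines()):
#       print(token, end="", flush=True)
```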
-
I followed "[ChatOllama Installation and Configuration Tutorial] 01: Install ChatOllama with Docker, a 100% Local Knowledge Base in 3 Minutes" step by step.
I used a local Docker install, but during installation there was a component called PeanutShell that was different from the video. It took a long time to download; the command line showed a size of 8.9 GB, but my C: drive usage grew by at least 20 GB. What is this? As a beginner I find it worrying. Any pointers would be appreciated, thanks.
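On the disk-usage question: Docker stores images, volumes, and build cache separately, so total disk consumption can be well above the size reported for a single pull. These stock Docker commands break it down (no ChatOllama-specific assumptions):

```
# Per-category breakdown of Docker's disk usage
# (images, containers, local volumes, build cache):
docker system df

# Sizes of individual images:
docker images
```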
-
Thanks for building this. The interface and functionality are very well done!
Do you have plans to integrate vector DBs into each "app"? Like the ability to connect to PGVector, Chroma, Pinecone, et…