-
**Is your feature request related to a problem? Please describe.**
For RAG QA we often want to fully utilize the model's context window by inserting as many retrieved documents as possible. Howe…
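A rough sketch of the idea (not the requested feature itself): greedily add already-ranked documents until a token budget is hit. The tokenizer choice, document list, and budget below are illustrative assumptions.
```python
# Hypothetical sketch: greedily pack ranked retrieved documents into the
# prompt until a fixed token budget is reached. The tokenizer, inputs, and
# budget are illustrative, not part of the original request.
import tiktoken

def pack_documents(retrieved_docs: list[str], max_tokens: int) -> list[str]:
    enc = tiktoken.get_encoding("cl100k_base")
    packed, used = [], 0
    for doc in retrieved_docs:  # assumes docs are already sorted by relevance
        n_tokens = len(enc.encode(doc))
        if used + n_tokens > max_tokens:
            break  # stop once the context budget would be exceeded
        packed.append(doc)
        used += n_tokens
    return packed

# Example: keep only as many documents as fit into ~3000 tokens.
# selected = pack_documents(docs, max_tokens=3000)
```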
-
**Objective:**
Create an advanced Embedding Evaluator tool designed to assess the quality of embeddings generated for context injection into a Large Language Model (LLM). The tool will use specific m…
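For illustration only, a sketch of one metric such an evaluator could compute: cosine similarity between the query embedding and each candidate context embedding. The function names and the ranking helper are assumptions, not part of the proposal.
```python
# Illustrative sketch of one possible evaluation metric: cosine similarity
# between the query embedding and each candidate context embedding.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_contexts(query_emb: np.ndarray, context_embs: list[np.ndarray]) -> list[tuple[int, float]]:
    """Return (index, score) pairs sorted by similarity to the query, highest first."""
    scores = [(i, cosine_similarity(query_emb, emb)) for i, emb in enumerate(context_embs)]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```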
-
### What is the issue?
The quality of the results returned by the embedding model is now much worse than in the previous version.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1…
-
Lots of multilingual datasets are listed here https://docs.google.com/spreadsheets/d/1qf0iYejG-9RgEEi13qB_SK_178-eNaeJDmSDNSj260A/edit?gid=1875159366#gid=1875159366 (from https://blog.voyageai.com/2024/06/…)
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
Hello,
How can I do multi-document RAG using Weaviate as the vector store?
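Not an official answer, but a minimal sketch assuming the LlamaIndex Weaviate integration (the question does not say which framework is in use); the connection call, collection name, and data directory are placeholders.
```python
# Minimal sketch assuming the LlamaIndex Weaviate integration; the framework
# choice, connection call, collection name, and data directory are assumptions.
import weaviate
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.weaviate import WeaviateVectorStore

client = weaviate.connect_to_local()  # adjust to your Weaviate deployment

# Load several documents into a single index; each file becomes one or more nodes.
documents = SimpleDirectoryReader("./docs").load_data()

vector_store = WeaviateVectorStore(weaviate_client=client, index_name="MultiDocRAG")
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# One query engine then retrieves across all ingested documents at once.
response = index.as_query_engine(similarity_top_k=5).query("your question here")
print(response)
```
If you later need per-document scoping, metadata filters on the query engine are one option.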
-
Application suggestions
1. Translator
* Chinese → English / Chinese → Japanese / English → Chinese / Japanese → Chinese
2. AI secretary
* Records your to-do items and reminds you of that day's schedule
3. AI girlfriend / boyfriend
* Role-playing
4. RAG (Retrieval Augmented Generation)
* After retrieval, hand the results to the LLM to read before answering (a minimal sketch follows this list)
* Can use https://serper.…
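A toy sketch of the retrieve-then-answer flow described in item 4; both helper functions are placeholders standing in for a real search call and a real LLM call.
```python
# Toy illustration of the flow in item 4: retrieve first, let the LLM read the
# results, then answer. Both helpers are placeholders (assumptions) standing in
# for a real search call (web search or vector store) and a real LLM backend.
def search(question: str) -> list[str]:
    return ["...retrieved snippet 1...", "...retrieved snippet 2..."]  # placeholder

def ask_llm(prompt: str) -> str:
    return "...model answer..."  # placeholder for the actual LLM call

def rag_answer(question: str) -> str:
    context = "\n\n".join(search(question))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```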
-
I have an RTX 4060 8GB in my laptop with 16 GB of RAM and an Intel i7-12700H CPU. When I run build-llama.sh or build-mistral.sh it gets killed automatically with the output below, and I found that my CPU gets 100% uti…
-
### What is the issue?
No issues with any model that fits into a single 3090, but it seems to run out of memory when trying to distribute to the second 3090.
```
INFO [wmain] starting c++ runner | ti…
-
I ran `python privateGPT.py` and hit this error. Could you help take a look? Thanks.
```sh
python privateGPT.py
Enter a query: give me a summary
Traceback (most recent call last):
File "/Users/…
-
### Project Name
Beauty AI Assistant
### Description
An innovative "AI Beauty Assistant" that merges cutting-edge technology with beauty expertise. By integrating the best-selling beauty and skinc…