-
### Issue you'd like to raise.
The docs include a simple demo that uses LongContextReorder to reorder documents, but if I want to use it together with RetrievalQAChain, how can I do that? Thanks.
### Suggestion:
_No response…
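One way to think about it: LongContextReorder is a document transformer, so it just needs to run on the retriever's hits before the chain builds its prompt. Below is a minimal plain-Python sketch of that wiring, assuming LongContextReorder's "lost in the middle" strategy (most relevant documents placed at the start and end of the list). The `ReorderingRetriever` class and the LangChain names mentioned in the comments (`ContextualCompressionRetriever`, `DocumentCompressorPipeline`) are assumptions to illustrate the pattern; check them against your installed LangChain version.

```python
# Sketch: applying a "lost in the middle" reordering to a retriever's
# results before they reach a QA chain. Pure Python, no langchain imports;
# the langchain class names in comments are assumptions from the docs.

def litm_reorder(docs):
    """Reorder docs (docs[0] = most relevant) so that the most relevant
    items sit at the start and end, and the least relevant in the middle."""
    docs = list(reversed(docs))
    reordered = []
    for i, doc in enumerate(docs):
        if i % 2 == 1:
            reordered.append(doc)
        else:
            reordered.insert(0, doc)
    return reordered

class ReorderingRetriever:
    """Wraps any retriever and reorders its hits before the chain sees them.
    In LangChain itself you would likely express this as a
    ContextualCompressionRetriever whose compressor is a
    DocumentCompressorPipeline containing LongContextReorder (assumed API)."""

    def __init__(self, base_retriever):
        self.base = base_retriever

    def get_relevant_documents(self, query):
        return litm_reorder(self.base.get_relevant_documents(query))

# Docs ranked most -> least relevant:
print(litm_reorder(["d1", "d2", "d3", "d4", "d5"]))
# -> ['d1', 'd3', 'd5', 'd4', 'd2']
```

You would then pass the wrapped retriever to `RetrievalQA.from_chain_type(llm=..., retriever=...)` in place of the original one, so the chain itself needs no changes.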
-
Following the instructions in the Developer docs, out of the box I get:
```
(ollama) ➜ AI git clone https://github.com/ollama/ollama.git
Cloning into 'ollama'...
remote: Enumerating objects: 1077…
-
When I use DuckDuckGo to search, I get the following error:
![Capture](https://github.com/assafelovic/gpt-researcher/assets/90330685/64c6522c-7c04-4626-963a-e28d9707882f)
When I use Tavily to search, I…
-
### Your current environment
```text
The output of `python collect_env.py`
Collecting environment information...
WARNING 07-11 22:54:46 _custom_ops.py:14] Failed to import from vllm._C with Modu…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a …
-
Would it be possible to support JPEG XL, which is lossless and more modern than PNG?
Probably this helps: https://pypi.org/project/pillow-jxl-plugin/
-
Looking at the code, the docs passed to rerank seem to be just the top-k most similar documents returned by the retrieval model, which are then reordered. For example, if top-k = 3 documents are passed to the LLM, rerank only changes their order, which doesn't sound like it would improve results much. Could you implement something like this instead: first retrieve the 10 most similar documents, then rerank those 10 and take the top 3 as the final top-k?
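The two-stage pattern described above (fetch a larger candidate pool cheaply, then rescore it with a stronger model and keep only the best few) can be sketched as below. `embed_score` and `rerank_score` are hypothetical stand-ins for a real bi-encoder similarity and a cross-encoder reranker; the function itself is just the control flow, not any particular library's API.

```python
# Sketch of retrieve-then-rerank: a fast retriever fetches fetch_k
# candidates, a (typically more expensive) reranker rescores only that
# small pool, and the reranked top_k are passed on to the LLM.
# embed_score / rerank_score are hypothetical scoring callables.

def retrieve_then_rerank(query, corpus, embed_score, rerank_score,
                         fetch_k=10, top_k=3):
    # Stage 1: cheap similarity search over the whole corpus.
    candidates = sorted(corpus,
                        key=lambda d: embed_score(query, d),
                        reverse=True)[:fetch_k]
    # Stage 2: the reranker runs only on the fetch_k candidates,
    # and only the reranked top_k survive.
    return sorted(candidates,
                  key=lambda d: rerank_score(query, d),
                  reverse=True)[:top_k]

# Toy demo with numbers standing in for documents: the "retriever"
# prefers values near the query, the "reranker" prefers larger values.
hits = retrieve_then_rerank(
    50, list(range(100)),
    embed_score=lambda q, d: -abs(q - d),
    rerank_score=lambda q, d: d,
)
print(hits)  # -> [54, 53, 52]
```

The key point is that the reranker never sees the full corpus, only the `fetch_k` candidates, so a larger pool (e.g. 10) gives it real room to change which documents make the final top 3, rather than merely reordering them.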
-
### Feature request
An integration of exllama in Langchain to be able to use 4-bit GPTQ weights, designed to be fast and memory-efficient on modern GPUs.
### Motivation
The benchmarks on the offi…
-
I guess it would be easy for you to run the ggml [llama.cpp](https://github.com/ggerganov/llama.cpp)-compatible models.
In that case, you wouldn't need a GPU and could run the models in memory.
From a s…
-
### What happened?
I set up the Ollama model, but why doesn't it use the locally deployed model on Ollama to answer? The Ollama link is valid, because the Ollama embedding model works. However…