-
**问题描述 / Problem Description**
When sharing GPU memory across multiple cards, the replies come back garbled.
**环境信息 / Environment Information**
- langchain-chatchat version / commit: 0.2.10
- Deployed with Docker (yes/no): no
- Model used (ChatGLM2-6B / Qwen-7B, etc.): ChatGLM3-6B-32k…
-
Really impressive work! I am a Python guy; is anyone interested in rewriting this project in Python?
-
### System Info
LangChain version 0.0.330
Chroma version 0.4.15
This may be either a true bug or just a documentation issue, but I implemented the simplest possible version of a ConversationalRetri…
-
### Issue you'd like to raise.
I have created a retrieval QA project. In this project, I want to add memory to the LLM so that it can also remember the previous chat. I have configured the LLM not to ans…
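One common pattern for this is to carry a transcript of prior turns and prepend it to the QA prompt alongside the retrieved context. The sketch below is illustrative only; the class and function names are hypothetical, not LangChain's API:

```python
# Hypothetical sketch: thread chat memory into a retrieval QA prompt.
# `context_docs` stands in for whatever the project's retriever returns.
class ChatMemory:
    def __init__(self):
        self.turns = []  # list of (question, answer) pairs

    def as_text(self):
        return "\n".join(f"Human: {q}\nAI: {a}" for q, a in self.turns)

def build_qa_prompt(memory, context_docs, question):
    context = "\n".join(context_docs)
    return (
        "Answer only from the context below.\n"
        f"Context:\n{context}\n\n"
        f"Chat history:\n{memory.as_text()}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

The memory text gives the model the prior exchange, while the "answer only from the context" instruction still constrains it to the retrieved documents.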
-
At the moment we insert both the system prompt (aka `prompt_prefix`) and the conversation history into the prompt without respecting model-specific prompt tags, effectively treating every model as a completion model.
…
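As a minimal sketch of what respecting model-specific tags could look like, the function below wraps the system prompt and turn history in the Llama-2 chat convention (`[INST]`, `<<SYS>>`). The function name and structure are illustrative, not this project's code:

```python
# Illustrative only: build a Llama-2-style chat prompt instead of naively
# concatenating system prompt and history as plain completion text.
def build_llama2_prompt(system_prompt, turns):
    """turns: list of (user, assistant) pairs; the final assistant may be None."""
    prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    for i, (user, assistant) in enumerate(turns):
        if i > 0:
            prompt += "<s>[INST] "
        prompt += f"{user} [/INST]"
        if assistant is not None:
            prompt += f" {assistant} </s>"
    return prompt
```

Other model families (ChatML, Alpaca, etc.) use different tags, which is why a per-model template rather than a single completion-style concatenation is needed.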
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
# Issue: Handling ingestion file types
The issue is handling the variety of file types.
e.g.
```py
DOCUMENT_MAP = {
    ".txt": TextLoader,
    ".py": TextLoader,
    ".pdf": PDFMinerLoad…
```
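One way to make such a map robust is to dispatch on the file extension with a fallback, so unknown file types degrade gracefully instead of raising a `KeyError`. A sketch, with placeholder loader classes standing in for the real ones in `DOCUMENT_MAP`:

```python
from pathlib import Path

# Placeholder loader; stands in for TextLoader, PDF loaders, etc.
class TextLoader:
    def __init__(self, path):
        self.path = path

DOCUMENT_MAP = {".txt": TextLoader, ".py": TextLoader}

def get_loader(path, default=TextLoader):
    # Lower-case the suffix so ".TXT" and ".txt" hit the same entry.
    loader_cls = DOCUMENT_MAP.get(Path(path).suffix.lower(), default)
    return loader_cls(path)
```

Normalizing the suffix and supplying a default loader covers the two most common failure modes: unexpected casing and unmapped extensions.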
-
### System Info
Langchain Version: 0.0.354 (also tried with 0.1.0)
Python version: 3.9.18
yfinance version: 0.2.35
OS: Windows 10
### Who can help?
@hwchase17 , @agola11
### Information…
-
### System Info
This occurs randomly, perhaps after I ask many questions. When it happens, only clearing the memory recovers it.
The code used to ask:
async for chunk in runnable.as…
-
### Feature request
Hi there 👋
Thanks a lot for the awesome library. The current implementation of `BaseCache` stores the prompt plus the LLM-generated text as the key.
This means that I am not real…
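To illustrate the keying behavior being discussed, here is a minimal in-memory stand-in where entries are keyed on a `(prompt, llm_string)` pair, so the same prompt sent under a different LLM configuration is a cache miss. The class and method names only loosely mirror the `BaseCache` interface; this is not the library's implementation:

```python
# Minimal stand-in for a prompt cache keyed on (prompt, llm_string).
class InMemoryCache:
    def __init__(self):
        self._store = {}

    def lookup(self, prompt, llm_string):
        # Returns None on a miss, the cached result on a hit.
        return self._store.get((prompt, llm_string))

    def update(self, prompt, llm_string, result):
        self._store[(prompt, llm_string)] = result

cache = InMemoryCache()
cache.update("2+2?", "gpt-4:temperature=0", "4")
cache.lookup("2+2?", "gpt-4:temperature=0")    # hit
cache.lookup("2+2?", "gpt-4:temperature=0.7")  # miss: same prompt, different config
```

Because the config string participates in the key, any scheme that wants to share cached answers across configurations would need to key on the prompt alone.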