-
## Model
- [Falcon 7B Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct)
## Steps
- [x] Use the [openllm](https://github.com/bentoml/OpenLLM) library to load the model (a minimal sketch follows this list)
- [x] Pass the model …
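A minimal sketch of those two steps, assuming an OpenLLM release from the Falcon era where the model is served via the CLI and queried over HTTP; the client API has changed across OpenLLM versions, so the names below are assumptions rather than the definitive interface:

```python
import openllm  # assumes an OpenLLM version whose Python client exposes HTTPClient

# The server is assumed to be running already, e.g. started with:
#   openllm start falcon --model-id tiiuae/falcon-7b-instruct
client = openllm.client.HTTPClient("http://localhost:3000")

# Pass a prompt to the served Falcon 7B Instruct model and print the completion.
print(client.query("Explain the difference between a list and a tuple in Python."))
```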
-
Hello,
I am on a WSL2 system, working in a conda virtual environment. Python is 3.9, with the libraries from requirements.txt installed.
After running chat.py and entering the passw…
-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is this question already answered in the FAQ? …
-
I used `llama-cpp-python` with `LangChain` and got an error when I tried to run the example code from the LangChain docs.
I installed it with:
`CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCMAKE_CUDA_FLAGS='-DGGML_CUDA_FORCE_…
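For reference, a minimal sketch of the LangChain-side usage that the docs example follows, with a placeholder model path and GPU offload settings sized for a small card; the import location (`langchain.llms` vs `langchain_community.llms`) depends on the installed LangChain version:

```python
from langchain.llms import LlamaCpp  # newer LangChain releases import this from langchain_community.llms

# model_path is a placeholder; point it at a local quantized model file.
llm = LlamaCpp(
    model_path="./models/llama-7b.ggmlv3.q4_0.bin",
    n_gpu_layers=32,   # layers to offload to the GPU (requires the cuBLAS build installed above)
    n_batch=512,
    n_ctx=2048,
    verbose=True,
)

print(llm("Name the planets in the solar system."))
```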
-
I tried to switch the LLM provider to Bedrock Converse and use the Haiku model ID in the memory-template
(https://github.com/langchain-ai/memory-template), but it fails.
It relates to how trustcal…
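For comparison, a minimal sketch of calling Bedrock Converse through LangChain directly, outside the template; the Haiku model ID and region below are assumptions and should match whatever is enabled in your AWS account:

```python
from langchain_aws import ChatBedrockConverse

# Model ID and region are assumptions; substitute the values enabled in your AWS account.
llm = ChatBedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    region_name="us-east-1",
)

print(llm.invoke("Summarize what the memory template stores about a user.").content)
```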
-
PC Hardware:
- i7-9700KF
- Nvidia GTX 1660 (6 GB)
- 16 GB RAM
Model used:
- TheBloke/wizardLM-7B-GPTQ
- wizardLM-7B-GPTQ-4bit-128g.no-act.order.safetensors
Environment:
Running in Windows 10 WSL2…
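A minimal sketch of loading this checkpoint with AutoGPTQ outside any UI, as a way to check whether the 6 GB card can hold the 4-bit weights at all; the keyword arguments are assumptions based on common AutoGPTQ usage and may differ between versions:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM  # assumes the auto-gptq package is installed

repo = "TheBloke/wizardLM-7B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)

# model_basename is the .safetensors file name without its extension.
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="wizardLM-7B-GPTQ-4bit-128g.no-act.order",
    use_safetensors=True,
    device="cuda:0",
)

inputs = tokenizer("What is WSL2?", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```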
-
Ideally, a bot like [MiniAGI](https://github.com/muellerberndt/mini-agi/) would keep all previous interactions in memory but summarize old interactions once the context window is full. It appears like…
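One way to approximate that behavior is a summarizing buffer memory: recent turns stay verbatim and older turns are folded into a running summary once a token budget is exceeded. The sketch below uses LangChain's `ConversationSummaryBufferMemory` purely as an illustration of the pattern, not as MiniAGI's actual implementation; the token limit is an assumed value:

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory

# The summarizer model and token budget are assumptions; tune them to the agent's context window.
llm = ChatOpenAI(temperature=0)
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=1500)

# Recent exchanges stay verbatim; once max_token_limit is exceeded,
# the oldest turns are compressed into a moving summary.
memory.save_context({"input": "Create a TODO app"}, {"output": "Scaffolded the project."})
memory.save_context({"input": "Add a database"}, {"output": "Added SQLite models."})
print(memory.load_memory_variables({}))
```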
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
I tried to use a locally configured Milvus instance for development work, and then it gave this exce…
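A minimal connectivity check against a local Milvus instance, useful for ruling out deployment or network issues before involving the application code; the host and port are the Milvus standalone defaults and may need adjusting:

```python
from pymilvus import connections, utility

# Default standalone Milvus address; adjust if your local deployment differs.
connections.connect(alias="default", host="127.0.0.1", port="19530")
print("Milvus server version:", utility.get_server_version())
```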
-
Hello, I created a new python=3.10 environment with conda, ran `pip install -r requirements.txt`, and found a version conflict between `langchain` and `llama-index`.
Question: which versions of these packages are compatible?
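A small diagnostic sketch, assuming both packages did install despite the resolver warning, that prints the installed versions and the constraints each package declares on the other; this makes it easier to see which pins in requirements.txt clash:

```python
from importlib.metadata import requires, version

# Show the installed versions and any cross-dependencies between the two packages.
for pkg in ("langchain", "llama-index"):
    print(pkg, version(pkg))
    for req in requires(pkg) or []:
        if "langchain" in req.lower() or "llama" in req.lower():
            print("  depends on:", req)
```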