-
### System Info
Name: langchain
Version: 0.0.251
Name: faiss-cpu
Version: 1.7.1
Name: llama-cpp-python
Version: 0.1.77
### Who can help?
_No response_
### Information
- [X] The official …
-
I'm running this code from the tutorials:
```
from promptwatch import register_prompt_template, PromptWatch
from langchain import OpenAI, LLMChain, PromptTemplate
prompt_template = PromptTemplate.from_te…
```
-
Hello, I'm trying to figure out why my h2ogpt doesn't use my GPU at all. It seems something is wrong with bitsandbytes, since it reports being compiled without GPU support. I made everything work…
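Before digging into bitsandbytes itself, it is worth confirming that PyTorch can see the GPU at all; if it can't, bitsandbytes will fall back to CPU no matter how it was compiled. A minimal sketch (assumes nothing beyond an optional torch install):

```python
# Quick GPU sanity check: does PyTorch itself detect a CUDA device?
import importlib.util

def cuda_status():
    # Avoid a hard dependency: only import torch if it is installed.
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    return "cuda available" if torch.cuda.is_available() else "cuda NOT available"

print(cuda_status())
```

If this prints "cuda NOT available", the problem is upstream of bitsandbytes (driver, CUDA runtime, or a CPU-only torch wheel).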
-
Hi everyone. I'm trying to use the brand-new MPT-7b included in vllm. I'm running on SageMaker Studio on a g4dn.2xlarge instance, but I'm getting the following error:
`RuntimeError: probabili…
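This kind of RuntimeError is typically raised at sampling time when the probability tensor has degenerated (NaN, inf, or negative entries, often from fp16 overflow or an extreme temperature/top-p setting). A plain-Python illustration of the invariant being violated (hypothetical `safe_probs` helper, not vllm code):

```python
import math

def safe_probs(probs):
    # The sampler needs finite, non-negative entries; NaN/inf/negative
    # values are exactly what triggers this RuntimeError at sampling time.
    if any(math.isnan(p) or math.isinf(p) or p < 0 for p in probs):
        raise ValueError("degenerate probability vector")
    total = sum(probs)
    return [p / total for p in probs]

print(safe_probs([0.2, 0.3, 0.5]))   # a healthy distribution passes
try:
    safe_probs([float("nan"), 1.0])
except ValueError as err:
    print("rejected:", err)          # a degenerate one is caught early
```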
-
**Problem Description**
Installed requirements.txt as instructed and verified langchain==0.0.257.
Initialized the knowledge base: `python init_database.py --recreate-vs`
Error: exception: partition() got an unexpected keyword argument 'autodetect_enc…
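An "unexpected keyword argument" error like this usually means a version mismatch: the langchain loader is passing a keyword that the installed `unstructured` release's `partition()` does not yet accept, and aligning the two versions is the real fix. As a generic stopgap, a shim can drop keywords the callee doesn't support (hypothetical helper, purely illustrative):

```python
import inspect

def call_with_supported_kwargs(fn, *args, **kwargs):
    # Drop keyword arguments the callee does not accept. Only filters when
    # the function has no **kwargs catch-all of its own.
    params = inspect.signature(fn).parameters
    if not any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        kwargs = {k: v for k, v in kwargs.items() if k in params}
    return fn(*args, **kwargs)

def old_partition(filename):
    # Stand-in for an older partition() without the newer keyword.
    return [filename]

print(call_with_supported_kwargs(old_partition, "doc.txt", autodetect_encoding=True))
# → ['doc.txt']
```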
-
## Description
After installation, I ran
```
%load_ext jupyter_ai
%%ai chatgpt -f math
Generate the 2D heat equation
```
and got errors:
RateLimitError Traceback…
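A `RateLimitError` comes from the OpenAI client underneath jupyter-ai rather than from the magic itself, and often indicates an exhausted quota or missing billing rather than genuinely rapid requests, so the account is worth checking first. For genuinely transient limits, the usual remedy is exponential backoff with jitter, sketched generically here (a stand-in exception, not the jupyter-ai or openai API):

```python
import random
import time

def with_backoff(fn, retries=5, base=0.01):
    # Retry with exponential backoff and jitter -- the standard remedy for
    # 429-style rate limits. RuntimeError stands in for a RateLimitError.
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == retries - 1:
                raise
            time.sleep(base * 2 ** attempt + random.random() * base)

calls = {"n": 0}
def flaky():
    # Fails twice, then succeeds, mimicking a transient rate limit.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_backoff(flaky))  # → ok
```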
-
I want to use llamaindex, but I don't want any of my data transferred to any external servers; everything should happen locally or within my own EC2 instance. I have seen https://github.com/jerryjliu/lla…
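Independent of which library ends up in the stack, one blunt but effective way to prove a "local-only" pipeline really stays local is to block outbound sockets for the process and check that the code still runs. A stdlib-only sketch (the context manager is a hypothetical helper, not part of llamaindex):

```python
import socket

class NoNetwork:
    # Context manager that makes any outbound socket connect() raise.
    # Run indexing/query code inside this block: if it completes, no
    # data left the machine over a TCP connection.
    def __enter__(self):
        self._orig = socket.socket.connect
        def blocked(sock, addr):
            raise RuntimeError(f"outbound connection attempted: {addr!r}")
        socket.socket.connect = blocked
        return self

    def __exit__(self, *exc):
        socket.socket.connect = self._orig

with NoNetwork():
    try:
        # 192.0.2.1 is a reserved TEST-NET address; no DNS lookup involved.
        socket.create_connection(("192.0.2.1", 80), timeout=1)
        leaked = True
    except RuntimeError:
        leaked = False

print("leaked:", leaked)  # → leaked: False
```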
-
Installed following the [install docs](https://github.com/imClumsyPanda/langchain-ChatGLM/blob/master/docs/INSTALL.md),
but the last step fails:
python loader/image_loader.py
Traceback (most recent call last):
File "/langchain-Chat…
-
The configuration file model_config.py is:
```
import torch.cuda
import torch.backends
import os
import logging
import uuid
LOG_FORMAT = "%(levelname) -5s %(asctime)s" "-1d: %(message)s"
logger = logging.getLogger(…
```
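Note that `LOG_FORMAT` above is two adjacent string literals, which Python concatenates, so `-1d:` ends up as literal text glued to the timestamp rather than a format directive. A minimal stdlib demonstration of what that format string actually produces (assuming the file matches the upstream config):

```python
import logging

# Adjacent literals concatenate: the effective format string is
# "%(levelname) -5s %(asctime)s-1d: %(message)s", where "-1d:" is literal.
LOG_FORMAT = "%(levelname) -5s %(asctime)s" "-1d: %(message)s"

formatter = logging.Formatter(LOG_FORMAT)
record = logging.LogRecord("demo", logging.INFO, __file__, 1,
                           "vector store loaded", None, None)
print(formatter.format(record))
# e.g. "INFO  2023-08-10 12:00:00,000-1d: vector store loaded"
```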
-
Using inference.py, the results are abnormal.
`python inference.py --model_type llama --base_model IDEA-CCNL/Ziya-LLaMA-13B-v1 --lora_model shibing624/ziya-llama-13b-medical-lora --with_prompt --interactive`
Downloaded bas…