-
### Describe the bug
After running `openllm build dolly-v2` to download the dolly-v2 model, I attempted to run the command `openllm start dolly-v2` and got the following error:
I also got this s…
-
### Describe the bug
In the dolly_v2 configuration, the return statement seems to look for the key "generated_key" in the first element of the result. However, no such key exists, since the returned r…
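For context, here is a minimal sketch (not OpenLLM's actual code, and using a small stand-in model rather than dolly-v2's own pipeline) of what a standard transformers text-generation pipeline returns: a list of dicts keyed by `"generated_text"`, so a lookup of `"generated_key"` on the first element raises a `KeyError`.

```python
from transformers import pipeline

# Illustration only: standard text-generation pipeline output shape.
generator = pipeline("text-generation", model="gpt2")
result = generator("Hello, my name is", max_new_tokens=8)

print(result[0].keys())              # dict_keys(['generated_text'])
print(result[0]["generated_text"])   # works: this key exists
# result[0]["generated_key"]         # KeyError: no such key in the pipeline output
```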
-
### Describe the bug
I'm trying to run tiiuae/falcon-7b with the vLLM backend, with or without adapters.
It fails with a "Response not completed" error, which is triggered after a ValueError, as seen in t…
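To help narrow down where the ValueError originates, a hedged diagnostic sketch is to load the same checkpoint directly with vLLM, outside OpenLLM; if this also fails, the problem is likely in vLLM's falcon support rather than in OpenLLM's runner.

```python
from vllm import LLM, SamplingParams

# Diagnostic sketch: load tiiuae/falcon-7b straight through vLLM.
# trust_remote_code covers falcon's custom modelling code if needed.
llm = LLM(model="tiiuae/falcon-7b", trust_remote_code=True)
outputs = llm.generate(["The capital of France is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```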
-
### Describe the bug
```
openllm start baichuan --model-id baichuan-inc/baichuan-13b-chat --backend vllm
2023-09-09T12:24:15+0800 [ERROR] [runner:llm-baichuan-runner:1] Traceback (most recent cal…
-
### Describe the bug
When attempting to use OpenLLM to run a fine-tuned model, it fails to download any files with a '.bin' extension (usually the model weights).
This issue results in missing file…
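As a possible workaround while this is being looked at, the weights can be pre-fetched with `huggingface_hub` so the '.bin' files are already in the local cache before OpenLLM looks for them. The repo id below is a placeholder for the fine-tuned model in question.

```python
from huggingface_hub import snapshot_download

# Placeholder repo id; replace with the fine-tuned model's repo.
local_dir = snapshot_download(
    repo_id="your-user/your-finetuned-model",
    allow_patterns=["*.bin", "*.json", "*.txt", "*.model"],  # explicitly include the weight shards
)
print(local_dir)
```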
-
### Feature request
Nous Research and EleutherAI have recently released the **YaRN** model, which comes in two versions with context sizes of 64k and 128k. This model utilizes RoFormer-style embeddin…
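For reference, a hedged sketch of how the 64k variant can currently be loaded with plain transformers (the model id is assumed from the Nous Research release; the checkpoint bundles its own scaled-RoPE code, hence `trust_remote_code`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Yarn-Llama-2-7b-64k"  # assumed id of the 64k release
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # custom YaRN rotary-embedding code ships with the checkpoint
    device_map="auto",
)
```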
-
We have a makefile to invoke openllm and build a bento, but it's no longer working. It fails with the following error.
```
# Build the BentoML service.
openllm build flan-t5 --model_id google/fla…
-
### Dify version
0.3.22
### Cloud or Self Hosted
Self Hosted
### Steps to reproduce
With a ChatGLM2 model backend deployed via OpenLLM and configured in Dify through a template app, testing very often triggers a prompt or query length-exceeded error.
Checking the docker_api logs shows a message that the token length exceeds the model's 102…
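One way to confirm the limit independently of Dify is to count tokens with the ChatGLM2 tokenizer before sending the request; a rough sketch, assuming the deployed checkpoint is THUDM/chatglm2-6b:

```python
from transformers import AutoTokenizer

# Assumed checkpoint; adjust to the exact ChatGLM2 model served behind OpenLLM.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)

prompt = "..."  # the prompt/query Dify sends
print(len(tokenizer.encode(prompt)))  # compare against the model's context limit
```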
-
### Feature request
It would be nice to support database querying/agents, where I could define my database and credentials and then prompt:
"show me the last 50 records in a table and bar graph"
#…
-
I see this line in the README: "Visit http://localhost:3000/docs.json for OpenLLM's API specification."
Can we get the documentation without having to install/run openllm? I don't have a GPU in my lo…