-
### Describe the bug
Followed this [quickstart](https://docs.bentoml.org/en/latest/quickstarts/deploy-a-large-language-model-with-openllm-and-bentoml.html#), but encountered a TypeError when trying t…
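For reference, the core flow of that quickstart is roughly the sketch below; the model name and flags are illustrative assumptions, not copied from the docs:
```bash
# Illustrative sketch of the quickstart flow (model id is an assumption):
pip install openllm

# Start an OpenLLM server for a model family supported at the time.
openllm start opt --model-id facebook/opt-1.3b
```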
-
### Describe the bug
[ERROR] [runner:llm-falcon-runner:2] Exception occurs when trying to perform inference on server.
### To reproduce
1. `openllm start falcon --model-id tiiuae/falcon-7b-instruct…
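For context, step 1 starts the HTTP server; a minimal inference request against it might look like the sketch below. The `/v1/generate` endpoint appears in other reports here, but the port and payload shape are assumptions:
```bash
# Hypothetical request; assumes the default port 3000 and a
# simple {"prompt": ...} payload shape.
curl -X POST http://localhost:3000/v1/generate \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "What is a falcon?"}'
```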
-
### Describe the bug
Somewhere along the way in the tag refactoring, `--model-id` failed to load a model from a local path.
We have an internal tracking issue on reworking the tag generation, which …
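For context, the regressed usage is of the form sketched below, where `--model-id` points at a directory on disk rather than a Hugging Face model id; the path is illustrative:
```bash
# Expected-but-broken usage: load a model from a local checkpoint
# directory (path is illustrative).
openllm start falcon --model-id /path/to/local/falcon-7b-instruct
```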
-
### Describe the bug
Hi there, thanks for providing this brilliant work!
I cannot run the Baichuan-13B-Chat model successfully; it said the model `is not found in BentoML store, you may need to run …`
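The truncated error message suggests the model first has to be pulled into the local BentoML store. A plausible prerequisite step, assuming the `openllm download` subcommand and this Hugging Face model id, is:
```bash
# Assumed prerequisite: fetch the weights into the BentoML store
# before `openllm start` can find them.
openllm download baichuan --model-id baichuan-inc/Baichuan-13B-Chat
```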
-
### Describe the bug
After updating OpenLLM from 0.1.20 to 0.2.0, I tried to load the Baichuan-13B-Chat model as follows:
```
openllm start baichuan --model-id /home/user/.cache/modelscope/hub/baic…
```
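One way to check whether the checkpoint was actually registered in the BentoML store (a hedged debugging suggestion, not part of the original report):
```bash
# List models currently stored in the local BentoML model store.
bentoml models list
```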
-
### Describe the bug
I tried to load the Baichuan2-13B-Chat model but it failed with an exception.
I wonder if it is supported or whether I have to configure it somehow.
### To reproduce
Run
`TRUST_REMOTE_CODE=True op…
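A plausible reconstruction of the truncated command, assuming the usual `openllm start` shape and the upstream Hugging Face id for this model:
```bash
# Hypothetical full invocation; the model id is an assumption based
# on the report's mention of Baichuan2-13B-Chat.
TRUST_REMOTE_CODE=True openllm start baichuan \
  --model-id baichuan-inc/Baichuan2-13B-Chat
```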
-
### Feature request
I have deployed an OpenLLM server on a managed service that is protected by an auth Bearer token:
```bash
curl -X 'POST' \
'https://themodel.url/v1/generate' \
  -H "A…
```
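For reference, a complete request of this shape would look like the sketch below; the token and payload are placeholders, and only the URL comes from the report:
```bash
# Hypothetical full request; the Authorization header follows the
# standard Bearer scheme, and the payload shape is an assumption.
curl -X POST 'https://themodel.url/v1/generate' \
  -H 'Authorization: Bearer <TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "Hello"}'
```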
-
### Self Checks
- [X] I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…
-
```
2023-11-19T23:09:00+0800 [ERROR] [cli] Exception in callback
Traceback (most recent call last):
  File "/tmp2/t12902101/miniconda3/envs/ol/lib/python3.11/site-packages/tornado/ioloop.py", line 919, i…
```
-
### Feature request
Adding support for the Mistral architecture to OpenLLM.
### Motivation
The recently released Mistral 7B model is claimed to match the performance of Llama 2 13B, which makes it …
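If support is added, starting the model would presumably follow the existing `openllm start <family>` pattern; a hypothetical invocation (the `mistral` family name is an assumption, the model id is the upstream Hugging Face release):
```bash
# Hypothetical: assumes OpenLLM gains a `mistral` model family that
# follows its usual CLI pattern.
openllm start mistral --model-id mistralai/Mistral-7B-v0.1
```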