-
### Describe the bug
```
openllm start baichuan --model-id baichuan-inc/baichuan-13b-chat --backend vllm
2023-09-09T12:24:15+0800 [ERROR] [runner:llm-baichuan-runner:1] Traceback (most recent cal…
-
### Describe the bug
I'm using conda to create an env with Python 3.10.12, and I install the related packages using
```bash
pip install "openllm[llama, vllm]"
```
When I start a Llama service using
```bas…
-
This issue will be used to track progress on, and coordinate with, distributions for the [1.9 release](https://github.com/kubeflow/manifests/issues/2600).
While we hope all distros will manage to…
-
### Describe the bug
When trying to use Flan-T5 model I keep getting the following error:
```
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
```
I follow the …
-
This is part 1 of an internal refactoring to provide a nicer and more flexible API for users of LLM.
Note that these are mostly internal changes, and hopefully they shouldn't affect users too much.
…
-
### Contact Details [Optional]
wawrzynski.adam@protonmail.com
### System Information
ZENML_LOCAL_VERSION: 0.40.1
ZENML_SERVER_VERSION: 0.40.1
ZENML_SERVER_DATABASE: sqlite
ZENML_SERVER_DEPLOYMEN…
-
I have created the .mar file from all of the model's files. I'm not able to create the BentoML deployment from the .mar file.
-
### Describe the bug
The issue is related to: https://bentoml.slack.com/archives/CKRANBHPH/p1682541555153939
I set my svc.api's input to bentoml.io.Multipart:
`@svc.api(input=Multipart(image=Ima…
-
### Describe the bug
After updating OpenLLM from 0.1.20 to 0.2.0, I tried to load the Baichuan-13B-Chat model as follows:
```
openllm start baichuan --model-id /home/user/.cache/modelscope/hub/baic…
-
### Describe the bug
Somewhere along the way with the tag refactoring, `--model-id` stopped loading models from a local path.
We have an internal tracking issue on reworking the tag generation, which …