-
### Additional comment:
I would like to have the latest version, and I am happy to help maintain it.
-
YOLOv5 models or PyTorch Lightning models converted to ONNX are of the `onnx.onnx_ml_pb2.ModelProto` format.
I am using BentoML to deploy my ONNX model but am receiving the following error:
```
❯ ben…
```
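For reference, a minimal sketch of how such a `ModelProto` can be registered in the BentoML model store, assuming BentoML 1.x and a local `model.onnx` export; the file name and the `yolov5_onnx` tag are placeholders:
```python
import onnx
import bentoml

# onnx.load returns an onnx.onnx_ml_pb2.ModelProto, the same type
# produced by YOLOv5's export.py or PyTorch Lightning's to_onnx().
model = onnx.load("model.onnx")

# Store the proto in the local BentoML model store; "run" is the
# method the bentoml.onnx runner exposes at serving time.
bento_model = bentoml.onnx.save_model(
    "yolov5_onnx",  # hypothetical tag
    model,
    signatures={"run": {"batchable": False}},
)
print(bento_model.tag)
```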
-
### Describe the bug
When I load my local model with
openllm start chatglm --model-id /chatglm-6b
I get an error:
openllm.exceptions.OpenLLMException: Model type is not supported yet.
How can I…
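For triage, OpenLLM presumably resolves the model type from the checkpoint's Hugging Face config, so one quick check (a sketch; `/chatglm-6b` is the path from the report) is whether `transformers` can read a `model_type` from that directory:
```python
from transformers import AutoConfig

# ChatGLM ships custom modeling code, hence trust_remote_code=True.
config = AutoConfig.from_pretrained("/chatglm-6b", trust_remote_code=True)
print(config.model_type)  # a complete ChatGLM checkpoint reports "chatglm"
```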
-
### Describe the bug
Hello,
I would like to try openllm offline, but I can't.
For my test, I downloaded the huggyllama--llama-7b model on another computer with internet access and copied the bento home to another c…
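For what it's worth, a sketch of the offline environment I would expect to work once the caches are copied over (paths are placeholders; the variables are the standard Hugging Face and BentoML ones):
```python
import os

# Point BentoML at the copied store and forbid network lookups so the
# cached huggyllama--llama-7b weights are used as-is.
os.environ["BENTOML_HOME"] = "/path/to/copied/bentoml_home"
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
```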
-
### Feature request
It would be nice to have the option to use AMD GPUs that support ROCm.
PyTorch seems to support AMD ROCm GPUs on Linux; the following was tested on Ubuntu 22.04.2 LTS with …
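ROCm builds of PyTorch reuse the CUDA device API, so the capability check looks the same as on NVIDIA hardware; a short sketch:
```python
import torch

# True on a working ROCm install; the CUDA namespace is reused on AMD.
print(torch.cuda.is_available())

# Set to the HIP version string on ROCm builds, None on CUDA builds.
print(torch.version.hip)
```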
-
### Feature request
It would be nice if warnings from the client were printed only once.
Right now, if a user has multiple JSON IOs with multiple pydantic schemas and uses `bentoml.client.Client`, th…
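One way to get that behaviour with the standard library alone (not a claim about how the BentoML client is implemented) is the "once" warnings filter:
```python
import warnings

# "once" collapses repeats of the same warning message across call sites;
# Python's default filter only deduplicates per source location.
warnings.filterwarnings("once", category=UserWarning)

def check(payload):
    warnings.warn("schema mismatch, validation is best-effort", UserWarning)
    return payload

check({})  # printed
check({})  # suppressed: same message already shown
```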
-
### Describe the bug
```
openllm start facebook/opt-1.3b
It is recommended to specify the backend explicitly. Cascading backend might lead to unexpected behaviour.
Traceback (most recent call last):
…
```
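Assuming the CLI syntax used elsewhere in these reports, passing the backend explicitly, e.g. `openllm start facebook/opt-1.3b --backend pt`, should at least silence the cascading-backend warning and make the failure easier to pin down.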
-
#### Description
Prepare a comprehensive and engaging presentation for 28.11, showcasing our project's progress and future roadmap.
#### Methodology:
1. **Content Gathering**: Compile key achieve…
-
### Describe the bug
Hi there, thanks for providing this brilliant work!
I cannot run the Baichuan-13B-Chat model successfully; it says the model "is not found in BentoML store, you may need to run …
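A quick way to confirm whether the weights were ever imported is to list the local BentoML model store (a sketch, assuming BentoML 1.x):
```python
import bentoml

# Print every model tag in the local store; Baichuan-13B-Chat should
# appear here once it has been downloaded/imported successfully.
for model in bentoml.models.list():
    print(model.tag)
```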
-
### Describe the bug
When I start openllm by running `openllm start facebook/opt-1.3b --backend vllm`, it stops during startup.
### To reproduce
1. Run `openllm start facebook/opt-1.3b --backend vllm`
2. er…
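One thing worth ruling out (a guess, not a confirmed diagnosis): the vLLM backend needs an importable `vllm` package, and an import failure would explain a startup that dies early:
```python
# If this raises, openllm's --backend vllm path cannot work either.
import vllm

print(vllm.__version__)
```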