-
### Describe the bug
I pushed a working model to our local Yatai server. However, I cannot pull it again, as the latest model is not found in the remote Bento store.
If I add the version code manuall…
-
### Describe the bug
I am hosting a flan-t5 model on CPU and I am getting the above error.
### To reproduce
openllm start google/flan-t5-small --port 3000 --do-not-track --api_workers 17
### Logs
…
-
### Describe the bug
Followed this [quickstart](https://docs.bentoml.org/en/latest/quickstarts/deploy-a-large-language-model-with-openllm-and-bentoml.html#), but encountered a TypeError when trying t…
-
I am trying to deploy a trained MONAI RetinaNet 3D PyTorch model using BentoML. I am getting errors related to the lack of a `__getstate__` attribute for `RetinaNetDetector` during a cloudpickle process.…
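A common workaround for this class of cloudpickle error is to give the object (or a wrapper around it) explicit `__getstate__`/`__setstate__` methods that exclude the non-picklable parts and rebuild them on load. The sketch below is a minimal, hypothetical illustration with a stand-in `DetectorWrapper` class; the real `RetinaNetDetector` may hold state (e.g. CUDA handles) that needs more careful reconstruction.

```python
import pickle

# Hypothetical wrapper illustrating the usual pattern: drop attributes
# that cannot be pickled in __getstate__, then recreate them in
# __setstate__ after unpickling.
class DetectorWrapper:
    def __init__(self, weights):
        self.weights = weights      # picklable state
        self.runtime = object()     # stand-in for a non-picklable handle

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("runtime", None)  # exclude the non-picklable part
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.runtime = object()     # recreate the handle on load

wrapper = DetectorWrapper(weights=[0.1, 0.2])
restored = pickle.loads(pickle.dumps(wrapper))
print(restored.weights)  # picklable state survives the round trip
```

The same two methods are honored by cloudpickle, so serializing the wrapper instead of the raw detector avoids the missing-`__getstate__` failure.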
-
### Describe the bug
When loading the Mistral and Llama models on a T4 GPU, I'm getting this error:
raise openllm.exceptions.OpenLLMException(f'Failed to initialise vLLMEngine due to the following er…
-
When I run `bentoml serve`, I encounter the following error. How can I resolve it?
```
File "/root/zhh6/Replace-Anything/./interface.py", line 46, in create_block
image_input = gr.inputs.Image(shape…
```
-
### Describe the bug
When attempting to run google/flan-t5-large with
openllm start google/flan-t5-large
it gave me:
ValueError: Model architectures ['T5ForConditionalGeneration'] are not supported for…
-
### Describe the bug
I'm trying some examples but they don't work for me; I'm not sure if it's due to a configuration issue on my side or README degradation, e.g. starting the LLM server or checking the LangChain integrat…
-
### Describe the bug
I was trying to serve the MusicGen Hugging Face model with BentoML. The input of the API was JSON; the output was multipart with np.array and text. But it produces something mysterious, which …
-
Following the installation instructions, when I create my first BentoDeployment I run into this error and the BentoDeployment is stuck as `Available: False`:
```
yatai-deployment-8586fcd67c-vtfsm ma…
tmyhu updated 11 months ago