Closed: jjmachan closed this 3 years ago
/test-e2e-deploy sha=564aaa8
Hey @NaxAlpha, can you try out this fix and let me know if it works?
Sure, let me try this!
Unfortunately I am getting this error now:
Running the docker image without --gpus all locally gives the same error.
This could be because of some nvidia-container-runtime issue, but I will unfortunately need some time to figure it out. Which instance_type are you using?
For this service: ml.g4dn.xlarge
While I could deploy this service on GPU last week, somehow it is not working right now. I am not sure if that is related to the current issue, but I will try multipart on another service deployed on a CPU instance and let you know!
Thanks @NaxAlpha!
The API Gateway is still not configured properly for the ImageInput handler. Can you try testing it with the FileInput handler too? Really sorry for these inconveniences, but we will get them ironed out ASAP.
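(For illustration only, a minimal sketch of what a FileInput-based service could look like, assuming the BentoML 0.13 adapters; the class name and handler body are placeholders, not the actual service under test.)

import bentoml
from bentoml.adapters import FileInput
from bentoml.types import FileLike


class FileModel(bentoml.BentoService):
    @bentoml.api(input=FileInput(), batch=False)
    def predict(self, file: FileLike):
        # FileInput hands the raw uploaded file to the handler,
        # sidestepping the image-specific decoding that ImageInput performs.
        data = file.read()
        return {"size": len(data)}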
OK, just tested a CPU service. The fix is working perfectly!
BTW, the CPU service looks something like this (it was also not working previously because of the same issue):
import bentoml
from bentoml.adapters import MultiFileInput
from bentoml.types import FileLike


class Model(bentoml.BentoService):
    @bentoml.api(
        input=MultiFileInput(
            input_names=[
                "image1",
                "image2",
                "config",
            ]
        ),
        batch=False,
    )
    def predict(self, image1: FileLike, image2: FileLike, config: FileLike):
        ...
We have figured out a better approach to this problem in #20, hence closing this.
This is supposed to fix https://github.com/bentoml/BentoML/issues/1822