Open BangDaeng opened 8 months ago
Hi, I think this is an intentional feature of BentoML: there is no way for BentoML to tell whether an MLflow model uses a GPU. You can see more detail at the link below:
https://github.com/bentoml/BentoML/blob/main/src/bentoml/_internal/frameworks/mlflow.py#L246
```python
# https://github.com/bentoml/BentoML/blob/main/src/bentoml/_internal/frameworks/mlflow.py#L246
class MLflowPyfuncRunnable(bentoml.Runnable):
    # The only case that multi-threading may not be supported is when user define a
    # custom python_function MLflow model with pure python code, but there's no way
    # of telling that from the MLflow model metadata. It should be a very rare case,
    # because most custom python_function models are likely numpy code or model
    # inference with pre/post-processing code.
    SUPPORTED_RESOURCES = ("cpu",)
    SUPPORTS_CPU_MULTI_THREADING = True
    ...
```
Have you tried this?
```python
from typing import List

import bentoml


class TestSentenceBert(_sbert_runnable):  # override the generated runnable class
    SUPPORTED_RESOURCES = ("gpu",)  # <--- add "gpu" to force the runner onto a GPU
    SUPPORTS_CPU_MULTI_THREADING = True

    def __init__(self):
        super().__init__()

    @bentoml.Runnable.method(batchable=True, batch_dim=0)
    def predict(self, sentences: List[str]):
        output = super().predict(sentences)
        return output
```
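The override works because `SUPPORTED_RESOURCES` is a plain class attribute that BentoML's runner scheduler reads off the runnable class; a subclass that shadows it is all the scheduler ever sees. Here is a minimal, dependency-free sketch of that mechanism (class names are illustrative, not the real BentoML classes):

```python
# Sketch only: stand-ins for bentoml.Runnable subclasses, no bentoml import needed.
class MLflowPyfuncRunnable:
    # BentoML's default for MLflow pyfunc models: CPU only.
    SUPPORTED_RESOURCES = ("cpu",)
    SUPPORTS_CPU_MULTI_THREADING = True


class TestSentenceBert(MLflowPyfuncRunnable):
    # Shadowing the class attribute is enough; no other hook is required.
    SUPPORTED_RESOURCES = ("gpu",)


# The base class is untouched; only the subclass advertises GPU support.
print(MLflowPyfuncRunnable.SUPPORTED_RESOURCES)  # ('cpu',)
print(TestSentenceBert.SUPPORTED_RESOURCES)      # ('gpu',)
```

Note that the exact resource string may differ between BentoML versions, so check what your installed version expects before relying on `"gpu"`.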
Describe the bug
I saved a bento with MLflow (sentence-transformers), and below is my service.py.
How can I use the GPU? The model is not found, and I want to call model.to("cuda:0").
To reproduce
No response
Expected behavior
No response
Environment
Latest version