For the taxi MLflow model use case (online deployment), I tried using Kubernetes as the compute resource.
Following the Microsoft docs, I created an AKS cluster, configured the extension, and attached it to the ML workspace. Then I attempted the online deployment via CLI v2, where I hit the following error: "InferencingClientCallFailed".
When deploying via the UI, it also asks me to upload a scoring script and environment for MLflow models [which is not the case with managed online endpoints].
So, do we need to manually add/configure the scoring script and environment details for MLflow models when we use Kubernetes?
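For reference, the setup and deployment steps I followed look roughly like this (a sketch of the commands; resource group, cluster, workspace, and endpoint names are placeholders, and `deployment.yml` is my CLI v2 deployment spec):

```shell
# Create the AKS cluster (placeholder names throughout)
az aks create --resource-group my-rg --name my-aks-cluster --node-count 3

# Install the Azure ML extension on the cluster
az k8s-extension create --name azureml-ext \
  --extension-type Microsoft.AzureML.Kubernetes \
  --cluster-type managedClusters \
  --cluster-name my-aks-cluster \
  --resource-group my-rg \
  --config enableInference=True inferenceRouterServiceType=LoadBalancer

# Attach the cluster to the ML workspace as a Kubernetes compute target
az ml compute attach --resource-group my-rg --workspace-name my-ws \
  --type Kubernetes --name k8s-compute \
  --resource-id /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.ContainerService/managedClusters/my-aks-cluster

# Create the online endpoint and deployment via CLI v2
az ml online-endpoint create --name taxi-endpoint \
  --resource-group my-rg --workspace-name my-ws
az ml online-deployment create --file deployment.yml \
  --resource-group my-rg --workspace-name my-ws
```

The last step is where the "InferencingClientCallFailed" error appears.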