Open nschenone opened 2 years ago
The following resolves the issue:
serving_fn.set_env("MLRUN_DBPATH", "http://host.docker.internal:8080/")
serving_fn.spec.volumes = [{'name': 'mlrun-data-mount', 'hostPath': {'path': '/Users/nick/mlrun-data'}}]
serving_fn.spec.volume_mounts = [{'name': 'mlrun-data-mount', 'mountPath': '/home/jovyan/data'}]
This assumes running in Jupyter on a Mac; update the paths for your environment accordingly.
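The workaround above can be collected into a small helper. This is a minimal sketch, not part of MLRun's API: `build_model_mount` is a hypothetical function, and it only constructs the Kubernetes-style volume/volumeMount dicts in the same shape as the snippet above, so the host path and mount path can be swapped per environment.

```python
def build_model_mount(host_path, mount_path, name="mlrun-data-mount"):
    """Build the volume and volume-mount dicts used in the workaround.

    `build_model_mount` is a hypothetical helper; the dict shapes mirror
    the snippet above (Kubernetes hostPath volume + volumeMount).
    """
    volume = {"name": name, "hostPath": {"path": host_path}}
    volume_mount = {"name": name, "mountPath": mount_path}
    return volume, volume_mount

vol, mnt = build_model_mount("/Users/nick/mlrun-data", "/home/jovyan/data")

# Applying it to a serving function (requires an mlrun function object,
# as in the workaround above):
# serving_fn.set_env("MLRUN_DBPATH", "http://host.docker.internal:8080/")
# serving_fn.spec.volumes = [vol]
# serving_fn.spec.volume_mounts = [mnt]
```

Keeping the `name` field identical in both dicts is what links the volumeMount to its volume, so the helper sets it once for both.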
MLRun Version checks
[X] I have checked that this issue has not already been reported.
[X] I have confirmed this bug exists on the latest version of the MLRun Kit.
Reproducible Example
Issue Description
When deploying a serving function on the docker-compose installation, the function crashes immediately because it cannot find the model. The Docker container crashes, but the MLRun/Nuclio UI does not reflect this and shows the function as still running.
Expected Behavior
The serving function should not crash; failing that, the MLRun/Nuclio UI should update to reflect the crash.
Python Version
3.8.8
MLRun Version
1.0.2
Additional Information
Originally discussed in the MLOps Live Slack channel. A workaround has been found and is attached above; however, a proper fix still needs to be added to the source code.