eddywart opened this issue 4 years ago
Hi @eddywart

The first message you saw, /home/pi/neo-ai-dlr/src/dlr_tvm.cc:71: No metadata found, is just a warning and can be safely ignored.

Based on the errors in your second log file, you are passing a directory that doesn't exist to DLRModel(): the /mlmodel directory is not present on the device. Is this something that Greengrass is supposed to set up? If so, this may be a bug in Greengrass.
[2020-07-07T11:48:03.357+08:00][FATAL]-lambda_runtime.py:140,Failed to import handler function "inference.handler" due to exception: model_path /mlmodel doesn't exist
[2020-07-07T11:48:03.357+08:00][FATAL]-lambda_runtime.py:380,Failed to initialize Lambda runtime due to exception: model_path /mlmodel doesn't exist
[2020-07-07T11:48:04.56+08:00][ERROR]-__init__.py:1037,2020-07-07 11:48:04,437 ERROR error in DLRModel instantiation model_path /mlmodel doesn't exist
[2020-07-07T11:48:04.56+08:00][ERROR]-Traceback (most recent call last):
[2020-07-07T11:48:04.56+08:00][ERROR]- File "/usr/lib/python3/dist-packages/dlr/api.py", line 82, in __init__
[2020-07-07T11:48:04.56+08:00][ERROR]- self._impl = DLRModelImpl(model_path, dev_type, dev_id)
[2020-07-07T11:48:04.56+08:00][ERROR]- File "/usr/lib/python3/dist-packages/dlr/dlr_model.py", line 101, in __init__
[2020-07-07T11:48:04.56+08:00][ERROR]- raise ValueError("model_path %s doesn't exist" % model_path)
[2020-07-07T11:48:04.56+08:00][ERROR]-ValueError: model_path /mlmodel doesn't exist
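For reference, a minimal sketch of the check the traceback is hitting; the /ml_model path below is a hypothetical placeholder and should be replaced with the local destination path configured for the ML resource attached to the Lambda in your Greengrass group:

```python
import os
from dlr import DLRModel

# Hypothetical mount point: replace with the local destination path of the
# ML resource attached to this Lambda in the Greengrass group configuration.
MODEL_DIR = '/ml_model'

# Same existence check dlr/dlr_model.py performs before loading anything;
# failing it here gives a clearer hint that the resource was not mounted.
if not os.path.isdir(MODEL_DIR):
    raise RuntimeError('model directory %s is not mounted; check the ML '
                       'resource attached to this Lambda' % MODEL_DIR)

# DLRModel(model_path, dev_type, dev_id), the same call shown in the traceback
model = DLRModel(MODEL_DIR, 'cpu', 0)
```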
I am trying to perform machine learning on the edge using a SageMaker Neo-compiled model as an AWS Greengrass deployment package, following the tutorial here: https://docs.aws.amazon.com/greengrass/latest/developerguide/ml-dlc-console.html
I installed the DLR package for the Raspberry Pi 3 Model B+ using the pre-built wheel from here: https://neo-ai-dlr.readthedocs.io/en/latest/install.html
When I run my test code on the Pi, the inference appears to succeed (see test-dlr.log), but the following message is printed: /home/pi/neo-ai-dlr/src/dlr_tvm.cc:71: No metadata found
test-dlr.log
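For context, a minimal sketch of the kind of DLR test script involved; the model directory, input name ('data'), and 1x3x224x224 input shape are assumptions for a Neo-compiled image-classification model and may differ from the actual setup:

```python
import numpy as np
from dlr import DLRModel

# Assumed paths and shapes for a Neo-compiled image-classification model;
# adjust to the actual compiled artifacts and network.
model = DLRModel('/home/pi/model', 'cpu', 0)           # directory holding the compiled model
x = np.random.rand(1, 3, 224, 224).astype('float32')   # dummy NCHW image batch

out = model.run({'data': x})                           # 'data' is a common input name for MXNet models
print('top-1 class index:', int(np.argmax(out[0])))
```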
After deploying the model to a Lambda function through AWS Greengrass, the same error appears in the log file, but the inference does not run successfully (optimizedImageClassification.log).
optimizedImageClassification.log
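Since the lambda_runtime error above is raised while importing inference.handler, the DLRModel is evidently constructed at module import time, which is why a missing /mlmodel directory is fatal before the handler ever runs. A rough sketch of that pattern is below; the /mlmodel path, topic name, and handler layout are assumptions, not the actual tutorial code:

```python
import greengrasssdk
from dlr import DLRModel

# Assumed layout: the model is loaded once at import time, which is why a
# missing /mlmodel directory surfaces as "Failed to import handler function".
MODEL_PATH = '/mlmodel'   # must match the ML resource's local destination path
model = DLRModel(MODEL_PATH, 'cpu', 0)

client = greengrasssdk.client('iot-data')

def handler(event, context):
    # Long-lived (pinned) Greengrass Lambdas usually do their work in a
    # background loop; the handler itself can just acknowledge invocations.
    client.publish(topic='dlr/inference/status',   # hypothetical topic
                   payload='model loaded from %s' % MODEL_PATH)
```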
What can I do to resolve this error?