Closed gillouche closed 4 years ago
@gillouche it seems to be failing when trying to read the JSON from your compiled model:
ERROR error in DLRModel instantiation TVMError: key 0 is not supported
[bt] (1) /usr/local/dlr/libdlr.so(tvm::runtime::GraphRuntime::Load(dmlc::JSONReader*)+0x1c78) [0xaa5a3d6c]
Can you share the graph.json for your compiled model?
Hi @samskalicky,
In the tar.gz created, I only have compiled_model.json. If that's what you meant, I attached it: compiled_model.json.zip
If not, how can I get it?
Thanks.
@gillouche can you confirm that you're giving Greengrass the S3 location of your compiled model and not the original uncompiled model?
Hi @samskalicky
Sorry for the slow response time. I currently download the models from S3, unzip them in /tmp, and load them manually. If I remember correctly, I tried both methods (manually downloading the zip file with the models from S3, and this procedure).
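For reference, the manual workflow described above (fetch the compiled-model archive, extract it under /tmp, then point DLR at the extracted directory) can be sketched roughly as follows. This is an illustrative sketch, not the poster's actual Lambda code: the S3 download is stubbed out with a locally built archive (with boto3 it would be roughly `s3.download_file(bucket, key, archive_path)`), and all names (`extract_model`, `demo`, `model-rasp4b.tar.gz`) are hypothetical.

```python
# Hedged sketch of the download-and-extract step; the real code would
# replace demo()'s stand-in archive with a boto3 download from S3.
import tarfile
import tempfile
from pathlib import Path


def extract_model(archive_path: str, dest_dir: str) -> list:
    """Extract a Neo-compiled model tarball and return its member names."""
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(dest_dir)
        return [m.name for m in tar.getmembers()]


def demo() -> list:
    # Build a stand-in archive (in place of the real S3 download) so the
    # sketch is self-contained and runnable.
    workdir = Path(tempfile.mkdtemp())
    (workdir / "compiled_model.json").write_text("{}")
    archive = workdir / "model-rasp4b.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(workdir / "compiled_model.json",
                arcname="compiled_model.json")
    return extract_model(str(archive), str(workdir / "unpacked"))
```

After extraction, the model directory (containing compiled_model.json and its companion files) would be handed to DLR, e.g. `DLRModel(model_dir, 'cpu')` on the RPI4.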
The project is moving fast and we are not using DLR anymore for the moment, so I don't have much time to test again; I am going to close this issue. I will open a new one, or reopen this one, if I encounter the problem again in the future when we need DLR.
Thanks for your help.
Hello,
I am building a FastAI model with SageMaker Neo for my Raspberry Pi 4 Model B. I am trying to deploy it with AWS Greengrass. DLR is installed directly on my RPI4 using this link from this page.
If I connect to my RPI4 over SSH and load the model from the Python interpreter, it works.
Unfortunately, the same model, downloaded from S3 in the Lambda deployed by AWS Greengrass, gives me that error.
I am trying to find out what this error means to see if I can fix it myself. Would you have any idea why I get this error in the Greengrass Lambda but not in the Python interpreter?
Thank you.