Closed: prakashjayy closed this issue 3 years ago
Thanks for the report. I am trying to fix it. It might take a few days; please be patient.
Thanks @grimoire. I was able to run the conversion successfully after commenting out this line. Not sure if the .engine file will suffice to deploy the model on DeepStream; still testing it.
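In case it helps, a quick way to sanity-check the exported engine before wiring it into DeepStream is to deserialize it with the TensorRT Python API. This is only a minimal sketch: `model.engine` and the plugin path are placeholders, not verbatim from the repo.

```python
import tensorrt as trt

# If the engine was built with custom plugins, load the plugin library first,
# e.g. `import ctypes; ctypes.CDLL("/path/to/libamirstan_plugin.so")`
# (the path above is a placeholder).
TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# "model.engine" is a placeholder for the file produced by the converter.
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

if engine is None:
    raise RuntimeError("engine failed to deserialize")
print("engine deserialized OK")
```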
@grimoire any update?
I found that models with an hourglass backbone (2 stacks, such as CornerNet) also have this problem, but I still don't know the reason. Sorry. There is a new PR with a C++ example; I plan to test the engine on it.
Model saving failed on a 2070S but succeeded on a 2080 Ti. It might be related to GPU memory size, but I still don't know why. Have you tried converting without Docker?
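For reference, a small sketch that prints total and currently allocated GPU memory right before the conversion, using the standard `torch.cuda` queries, could help compare the two cards:

```python
import torch

# Print total and currently allocated memory on the conversion GPU,
# to compare the 2070S and 2080 Ti runs.
dev = torch.device("cuda:0")
props = torch.cuda.get_device_properties(dev)
print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB total")
print(f"allocated: {torch.cuda.memory_allocated(dev) / 1024**3:.2f} GiB")
```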
Commenting out the `torch.save` line and setting `--save-engine true` worked. We were able to run the app using DeepStream too, both with Docker and without Docker. Closing this issue for now.
I have a similar issue. @prakashjayy, where did you set the `--save-engine true` setting?
I am trying to convert a model with mmdetection2tensorrt using the provided Dockerfile on a TX2 machine, but I am getting memory errors (see the workspace-size sketch after this comment).
Environment:
We have made several changes to the Dockerfile to get it to run on the Jetson TX2 device.
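One knob worth trying for out-of-memory failures on the TX2 is a smaller TensorRT workspace. The sketch below assumes the converter accepts a `max_workspace_size` argument, as its README example suggested; the exact keyword and default may differ by version, and the paths are placeholders.

```python
import torch
from mmdet2trt import mmdet2trt  # assumed entry point, as in the sketch above

trt_model = mmdet2trt(
    "config.py",                 # placeholder mmdetection config
    "checkpoint.pth",            # placeholder checkpoint
    fp16_mode=True,              # shrinks most buffers on a memory-limited TX2
    max_workspace_size=1 << 28,  # 256 MiB workspace instead of e.g. 1 << 30
)
torch.save(trt_model.state_dict(), "model_trt.pth")
```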