Open fclof opened 3 years ago
Hello, I am planning to buy an NVIDIA Jetson AGX and run a trained DeepLabv3+ model on it. However, when I searched on Google I found this post. Did you make it work? Or do you have any suggestions on how to make it work?
I didn't have enough time to get any model working on the Jetson, unfortunately. It was an intern project for which I had about 10 weeks. That said, at the time converting TF2 models for the Jetson was difficult, since every converter wanted to use TF1.
The issue you've raised looks like it might be fixed in a future release of TensorRT, so look out for that. If you're using the MobileNet backbone I foresee no issues, but with the Xception backbone, a lack of VRAM was a problem even on a V100, and mixed precision was buggy at the time. I suggest training an Xception backbone with mixed precision; combined with an updated TensorRT or ONNX conversion path, that may work. You may have to really dial down the precision for the model to run on the Jetson... once the model is supported, of course.
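For reference, the TF2 → ONNX → TensorRT path I had in mind looks roughly like this. This is a hedged sketch, not something I verified end to end on the Jetson: the SavedModel path, output names, and opset are placeholders, and the exact flags may differ across tf2onnx and TensorRT versions.

```shell
# 1. Export the trained Keras/TF2 model as an ONNX graph.
#    ./saved_model and model.onnx are placeholder paths; opset 13 is an assumption.
python -m tf2onnx.convert \
    --saved-model ./saved_model \
    --output model.onnx \
    --opset 13

# 2. Build a TensorRT engine from the ONNX file on the Jetson itself
#    (engines are not portable across devices). --fp16 enables reduced
#    precision, which is usually needed to fit within Jetson VRAM.
trtexec \
    --onnx=model.onnx \
    --saveEngine=model_fp16.plan \
    --fp16
```

If the FP16 engine still doesn't fit or unsupported ops show up, `trtexec --verbose` will report which layers fail to convert, which is a good starting point for deciding what to dial down.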
Having said that, I suspect many other models, such as those based on EfficientNets, may end up being more effective and produce better results anyway.