dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License
7.78k stars 2.98k forks

How to deploy my own UNET model on Jetson nano #1077

Closed sunmmerday closed 3 years ago

sunmmerday commented 3 years ago
I've trained a UNet model to segment lane lines. Running this model on the Jetson Nano with PyTorch gives 9 FPS. I then converted the model to an ONNX file and tried to modify the code based on segnet.py from the jetson-inference examples.
I got the .engine file, but when I tried to modify the network input directly, I found that I could not complete the image preprocessing. The core code for running this model directly in Python is as follows:

```python
transform = T.Compose([
    T.Resize(InputImgSize),
    T.ToTensor(),
    T.Normalize(mean=[0.5], std=[0.5]),
])
Unet.eval()
torch.set_grad_enabled(False)

while 1:
    ret, Img = Video.read()
    if ret == True:
        WarpedImg = cv2.warpPerspective(Img, H, (500, 500))
        WarpedImg_af = Image.fromarray(cv2.cvtColor(WarpedImg, cv2.COLOR_BGR2RGB))
        transImg = transform(WarpedImg_af)
        transImg = torch.unsqueeze(transImg, dim=0)
        transImg = transImg.float().to(Device)
        print(transImg.shape)
        OutputImg = Unet(transImg)
        OutputImg = OutputImg.cpu().numpy()[0, 0]
        OutputImg = (OutputImg * 255).astype(np.uint8)
        cv2.imshow("output", OutputImg)  # cv2.imshow needs a window name as the first argument
```

When I do the image preprocessing in OpenCV (for example, using transpose(2, 0, 1) to convert HWC to CHW), I find that the cudaFromNumpy function with isBGR=True returns a cudaMemory object, but the net.Process function only accepts a cudaImage. So I'd like to ask: can segnet.py give the same results that I get running directly in PyTorch? If not, how do I run inference with my own model on the Nano? Is there any reference documentation?
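For reference, the preprocessing pipeline in the question (ToTensor plus Normalize with mean 0.5 / std 0.5, then HWC→CHW and a batch dimension) can be reproduced in plain NumPy before the buffer is handed to any inference runtime. This is a minimal sketch independent of jetson-inference; `preprocess` is a hypothetical helper name, and the resize/BGR→RGB steps are assumed to have already been done with OpenCV as in the code above:

```python
import numpy as np

def preprocess(rgb_img):
    """Replicate T.ToTensor() + T.Normalize(mean=[0.5], std=[0.5]) in NumPy.

    rgb_img: uint8 HWC array, already resized and converted BGR->RGB
    (e.g. with cv2.resize and cv2.cvtColor).
    Returns a float32 NCHW batch of shape (1, C, H, W) with values in [-1, 1].
    """
    chw = rgb_img.astype(np.float32).transpose(2, 0, 1)  # HWC -> CHW
    chw /= 255.0                                         # ToTensor scales to [0, 1]
    chw = (chw - 0.5) / 0.5                              # Normalize to [-1, 1]
    return chw[np.newaxis, ...]                          # add batch dim -> NCHW

batch = preprocess(np.zeros((256, 256, 3), dtype=np.uint8))
print(batch.shape)  # (1, 3, 256, 256)
```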

dusty-nv commented 3 years ago

If it is a PyTorch model, I would recommend using the torch2trt tool to run it with TensorRT in Python: https://github.com/NVIDIA-AI-IOT/torch2trt

Then you wouldn't have to worry about adding the pre/post-processing for UNet to jetson-inference.
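For anyone landing on this thread, a torch2trt conversion might look roughly like the following. This is a sketch, not a tested implementation: the checkpoint path and input size are hypothetical, and torch2trt/TensorRT must be installed on the Jetson, so it can only run on-device:

```python
import torch
from torch2trt import torch2trt

# Load the trained UNet (hypothetical checkpoint path) and switch to eval mode.
model = torch.load("unet_lanes.pth").cuda().eval()

# torch2trt traces the model with an example input and builds a TensorRT engine.
x = torch.ones((1, 3, 256, 256)).cuda()            # adjust to your InputImgSize
model_trt = torch2trt(model, [x], fp16_mode=True)  # FP16 is typically much faster on Nano

# The converted module is called like a regular PyTorch model,
# so the existing pre/post-processing code can stay unchanged.
y_trt = model_trt(x)

# Save the converted weights so the conversion only has to happen once.
torch.save(model_trt.state_dict(), "unet_lanes_trt.pth")
```

Because the converted module keeps the standard `nn.Module` call interface, the while-loop in the question would only need `Unet` swapped for `model_trt`.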