Closed sunmmerday closed 3 years ago
If it is a PyTorch model, I would recommend using the torch2trt tool to run it with TensorRT in Python: https://github.com/NVIDIA-AI-IOT/torch2trt
Then you wouldn't have to worry about adding the pre/post-processing for UNet to jetson-inference.
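As a rough sketch of the torch2trt conversion flow (the input shape below is a placeholder — match it to whatever your UNet actually takes; this needs a CUDA device and TensorRT installed):

```python
import torch
from torch2trt import torch2trt

# Assumes `Unet` is your trained model; the 1x1x256x256 input is illustrative only
model = Unet.eval().cuda()
x = torch.ones((1, 1, 256, 256)).cuda()

# torch2trt traces the model with the example input and returns a
# TensorRT-optimized module with the same call signature
model_trt = torch2trt(model, [x])
y_trt = model_trt(x)
```

The converted `model_trt` can then be dropped into the inference loop in place of `Unet`, so the existing OpenCV/torchvision pre-processing stays unchanged.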
```python
transform = T.Compose([
    T.Resize(InputImgSize),
    T.ToTensor(),
    T.Normalize(mean=[0.5], std=[0.5]),
])
Unet.eval()
torch.set_grad_enabled(False)

while 1:
    ret, Img = Video.read()
    if ret == True:
        WarpedImg = cv2.warpPerspective(Img, H, (500, 500))
        WarpedImg_af = Image.fromarray(cv2.cvtColor(WarpedImg, cv2.COLOR_BGR2RGB))
        transImg = transform(WarpedImg_af)
        transImg = torch.unsqueeze(transImg, dim=0)
        transImg = transImg.float().to(Device)
        print(transImg.shape)
        OutputImg = Unet(transImg)
        OutputImg = OutputImg.cpu().numpy()[0, 0]
        OutputImg = (OutputImg * 255).astype(np.uint8)
        cv2.imshow("output", OutputImg)  # imshow needs a window name
        cv2.waitKey(1)                   # required for the window to refresh
```
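For reference, the `ToTensor` + `Normalize(mean=[0.5], std=[0.5])` pair above maps uint8 pixels in [0, 255] to floats in [-1, 1]. A minimal NumPy equivalent (resize omitted, dummy image for illustration) would be:

```python
import numpy as np

def preprocess(img_u8):
    """Replicate ToTensor + Normalize(mean=[0.5], std=[0.5]) on an HWC uint8 image."""
    x = img_u8.astype(np.float32) / 255.0   # ToTensor: scale to [0, 1]
    x = (x - 0.5) / 0.5                     # Normalize: shift to [-1, 1]
    return x.transpose(2, 0, 1)[None]       # HWC -> NCHW with batch dimension

img = np.full((4, 4, 3), 255, dtype=np.uint8)   # dummy all-white image
out = preprocess(img)
print(out.shape)   # (1, 3, 4, 4)
print(out.max())   # 1.0
```

Reproducing this same arithmetic on the GPU side is what you would need to match the PyTorch results from any other runtime.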
When I finish image preprocessing in OpenCV (for example, I use transpose(2, 0, 1) to convert HWC to CHW), I find that the cudaFromNumpy function with isBGR=True returns a cudaMemory object, but the net.Process function only accepts the cudaImage type. So I'd like to ask: can I get the same results with segnet.py that I get running directly in PyTorch? If not, how do I run inference with my own model on the Nano? Is there any reference document?
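On the HWC-to-CHW point specifically: the reordering itself is just an axis transpose, which is easy to verify in NumPy (a toy 2x2 image here, purely illustrative):

```python
import numpy as np

# Hypothetical 2x2 RGB image to show the HWC <-> CHW reordering
img = np.arange(12).reshape(2, 2, 3)      # HWC layout, as OpenCV produces
chw = img.transpose(2, 0, 1)              # CHW layout, as PyTorch/TensorRT expect
assert chw.shape == (3, 2, 2)
assert np.array_equal(chw.transpose(1, 2, 0), img)  # round-trips back to HWC
```

Note that `transpose` returns a non-contiguous view; if the downstream API needs contiguous memory, call `np.ascontiguousarray(chw)` before handing it off.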