jsynnott opened 3 years ago
Just an update. I killed the process as it didn't appear to be doing anything. I have managed to complete the model optimisation with:
with torch.cuda.device(0):
    model_trt = torch2trt.torch2trt(model, [data], fp16_mode=True, max_workspace_size=1<<25)
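Note that the original call contained two typos worth flagging: torch2trt's keyword is `max_workspace_size` (not `max_workplace_size`, which would simply be ignored as an unknown kwarg), and `1<25` is a less-than comparison, not a bit shift. A quick illustration in plain Python:

```python
# The intended value: 1 shifted left by 25 bits, i.e. a 32 MiB workspace.
workspace = 1 << 25
print(workspace)        # 33554432

# The typo `1<25` is a comparison and evaluates to the boolean True,
# which converts to the integer 1 -- a 1-byte workspace if passed through.
typo = 1 < 25
print(typo, int(typo))  # True 1
```

A too-small workspace can force TensorRT to reject faster tactics during engine building, so getting this value right matters for both build time and runtime performance.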
Hi, I have this problem too. It is taking too long and the system is stuck. I also got a low-memory warning on the Jetson Nano. What should I do?
Hi, I think you should export the torch model to ONNX first, then convert the ONNX model to a TensorRT engine.
Hi everyone, I have recently set up trt_pose on a fresh Jetson Nano, with all requirements installed, including PyTorch v1.6 and torchvision v0.7.0 on JetPack 4.4.
I am trying to run the live demo in jupyter notebook. The following line has taken 12 hours so far:
model_trt = torch2trt.torch2trt(model, [data], fp16_mode=True, max_workspace_size=1<<25)
Is this normal? Surely it can't be?