wang-xinyu / tensorrtx

Implementation of popular deep learning networks with TensorRT network definition API
MIT License

Convert single class .pt custom model to .wts model error #1550

Closed. shahla-ai closed this issue 4 months ago.

shahla-ai commented 4 months ago

Hello @wang-xinyu, I trained YOLOv8m on my custom single-class dataset and I am trying to optimize it with TensorRT, but I have a problem converting the .pt model into .wts with the gen_wts.py script:

File "gen_wts.py", line 40, in <module> model = torch.load(pt_file, map_location=device)['model'].float() # load to FP32 ModuleNotFoundError: No module named 'ultralytics.yolo'

Any help ?
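This error usually indicates that the installed ultralytics release no longer ships the ultralytics.yolo module that the pickled checkpoint references, so a plain torch.load() cannot resolve the pickled class paths. A quick, hypothetical check of the installed package (not part of gen_wts.py):

```python
# Hypothetical diagnostic: see whether the installed ultralytics package
# still provides the module path the checkpoint was pickled against.
import importlib.util

import ultralytics

print("ultralytics version:", ultralytics.__version__)
# None here means 'ultralytics.yolo' no longer exists in this install,
# which is why torch.load() fails to un-pickle the checkpoint.
print("ultralytics.yolo spec:", importlib.util.find_spec("ultralytics.yolo"))
```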

lgkkey commented 4 months ago

You can try using the YOLO class to load the .pt file, like:

    from ultralytics import YOLO

    model = YOLO(pt_file)
    device = torch.device('cpu')
    model.to(device).float()
    model = model.ckpt['model'].float()

Use this "model" to replace model = torch.load(pt_file, map_location=device)['model'].float() in gen_wts.py.
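For anyone hitting the same error, below is a minimal sketch of the replacement in context. It assumes pt_file and device are set up the same way gen_wts.py already does; the placeholder filename is illustrative, not from the original script.

```python
import torch
from ultralytics import YOLO

pt_file = 'best.pt'  # hypothetical path to the custom single-class checkpoint
device = torch.device('cpu')

# Load through the YOLO wrapper so ultralytics resolves its own module paths,
# instead of un-pickling the checkpoint directly with torch.load().
model = YOLO(pt_file)
model.to(device).float()

# Extract the underlying nn.Module in FP32. This replaces the original
# model = torch.load(pt_file, map_location=device)['model'].float() line,
# so the rest of gen_wts.py can keep using `model` as before.
model = model.ckpt['model'].float()
```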

shahla-ai commented 4 months ago

@lgkkey Thank you, it worked. Big thanks!