wang-xinyu / tensorrtx

Implementation of popular deep learning networks with TensorRT network definition API
MIT License

Yolov8 creating wts fails for newly trained models. #1545

Closed rgkannan676 closed 5 months ago

rgkannan676 commented 5 months ago

Running the command below to create the wts file fails. I am trying to convert the best segmentation model saved during training and get the error shown below.

 python gen_wts.py -w best.pt -o best.wts -t seg

Error:

File "C:\Users\tensorrtx\yolov8\gen_wts.py", line 40, in <module>
    model = torch.load(pt_file, map_location=device)['model'].float()  # load to FP32
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'float'

I also checked with the pre-trained model: it works fine, but it fails for custom-trained models. Can you please advise what can be done here?
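
A minimal snippet to compare what torch.load returns for the two checkpoints (file names are illustrative, matching the ones used above):

import torch

# Compare the pre-trained checkpoint with the custom-trained one.
for pt_file in ['yolov8s-seg.pt', 'best.pt']:
    ckpt = torch.load(pt_file, map_location='cpu')
    print(pt_file, sorted(ckpt.keys()))
    print('  model entry:', type(ckpt.get('model')))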

wang-xinyu commented 5 months ago

Can you try to find out how your custom model differs from the pre-trained one?

rgkannan676 commented 5 months ago

I trained using the command provided by Ultralytics and didn't make any other changes:

yolo detect train data=custome_data.yaml model=yolov8s-seg.pt epochs=100 imgsz=1024

However, when the resulting checkpoint is loaded with torch.load(pt_file, map_location=device), its 'model' entry is None.
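
One possibility (an assumption, depending on the Ultralytics version) is that the trained weights are stored under a different checkpoint entry such as 'ema'. A quick check:

import torch

ckpt = torch.load('best.pt', map_location='cpu')  # custom checkpoint from training
print(sorted(ckpt.keys()))
# Assumption: some Ultralytics versions keep the EMA weights under an 'ema' entry.
if ckpt.get('model') is None and ckpt.get('ema') is not None:
    print("'model' is None, but 'ema' holds a module:", type(ckpt['ema']))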

rgkannan676 commented 5 months ago

Loading the model with YOLO and assigning its model to the dict returned by torch.load solved the issue. I still need to check whether the rest of the steps work with this.

import torch
from ultralytics import YOLO

# pt_file and device are the same variables already defined in gen_wts.py.

# Load the checkpoint through the YOLO wrapper to get a usable 'model' object.
yolo_model = YOLO(pt_file)

# Load the raw checkpoint; for this custom checkpoint its 'model' entry is None (reason unknown).
model_loaded = torch.load(pt_file, map_location=device)
if model_loaded['model'] is None:  # fall back to the model loaded via YOLO
    model_loaded['model'] = yolo_model.model

model = model_loaded['model'].float()  # load to FP32
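
If only the nn.Module is needed for the remaining gen_wts.py steps, a shorter variant is to take the model directly from the YOLO wrapper. This is a sketch and has not been verified against the rest of the conversion:

import torch
from ultralytics import YOLO

# Take the underlying nn.Module straight from the YOLO wrapper and cast to FP32.
model = YOLO(pt_file).model.float().to(device)
model.eval()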