WongKinYiu / yolor

Implementation of the paper "You Only Learn One Representation: Unified Network for Multiple Tasks" (https://arxiv.org/abs/2105.04206)
GNU General Public License v3.0

Can I export the .pt model to use with Tensorflow? #128

Closed: chandralegend closed this issue 2 years ago

Deadpool5549 commented 2 years ago

Hi, did you find a way to do that? Thanks.

otsebriy commented 2 years ago

@Deadpool5549

I did that in a Jupyter notebook, but there is one problem: to predict on images you have to specify a fixed input shape, and once you change it, all the processing breaks.

The TensorFlow input shape is [1, 3, 384, 640]; right now it works with large images, but not with low-resolution ones. So I am trying to find a way to set the input shape to [1, 3, None, None] so it can predict on all images.

from models.models import Darknet
import torch

cfg = "../yolor/cfg/yolor_csp_x.cfg"
weights = "./checkpoints/to_server/yolor_csp_x_star.pt"
device = torch.device("cuda:0")
batch_size = 1
img_size = [384, 640]

img = torch.zeros((batch_size, 3, *img_size)).to(device)  # dummy input, shape (1, 3, 384, 640)

# Load PyTorch model
model = Darknet(cfg)
model.load_state_dict(
    torch.load(weights, map_location=device)["model"]
)
model.to(device).eval()

y = model(img, augment=False)  # dry run

try:
    import onnx

    print('\nStarting ONNX export with onnx %s...' % onnx.__version__)
    f = weights.replace('.pt', '.onnx')  # filename
    model.fuse()  # only for ONNX
    torch.onnx.export(
        model, img, f,
        verbose=False,
        opset_version=12,
        input_names=['images'],
        output_names=['classes', 'boxes'] if y is None else ['output'],
    )

    # Checks
    onnx_model = onnx.load(f)  # load onnx model
    onnx.checker.check_model(onnx_model)  # check onnx model
    # print(onnx.helper.printable_graph(onnx_model.graph))  # print a human readable model
    print('ONNX export success, saved as %s' % f)
except Exception as e:
    print('ONNX export failure: %s' % e)

import onnx
from onnx_tf.backend import prepare

model_path = "./checkpoints/to_server/yolor_csp_x_star.onnx"

# Convert the ONNX graph to a TensorFlow SavedModel
onnx_model = onnx.load(model_path)
tf_rep = prepare(onnx_model)
tf_rep.export_graph(f'{model_path.split(".onnx")[0]}/check/')
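To sanity-check the exported SavedModel, a minimal sketch of loading it back with TensorFlow could look like this (an assumption on my part that onnx-tf registers the default serving signature and keeps the ONNX input name `images`; the path matches the export_graph call above):

import numpy as np
import tensorflow as tf

# Load the SavedModel produced by export_graph above
loaded = tf.saved_model.load("./checkpoints/to_server/yolor_csp_x_star/check/")
infer = loaded.signatures["serving_default"]  # assumed signature key

# Same NCHW layout and dtype as the ONNX export
dummy = np.zeros((1, 3, 384, 640), dtype=np.float32)
outputs = infer(images=tf.constant(dummy))
print({name: t.shape for name, t in outputs.items()})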

Maybe you have any suggestions?

otsebriy commented 2 years ago

I found out how to fix the shapes. Just add the argument below to torch.onnx.export():

dynamic_axes={"images": {2: "height", 3: "width"}, "output": [1]}
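In context, the export call from the snippet above would then look something like this (a sketch that assumes the single 'output' head; the axis labels "height" and "width" are arbitrary names for the dynamic dimensions):

torch.onnx.export(
    model, img, f,
    verbose=False,
    opset_version=12,
    input_names=['images'],
    output_names=['output'],
    # mark input height/width and the output's detection axis as dynamic
    dynamic_axes={"images": {2: "height", 3: "width"}, "output": [1]},
)
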
otsebriy commented 2 years ago

If you still need to use the converted model with TensorFlow Serving, here are my notes:

  1. dynamic_shape did not allow me to get embeddings for low-resolution images.
  2. If you want to get predictions for images of different sizes, you need to work with the letterbox function and change its parameters as follows (see the sketch after this list):
     scaleFill -> True
     auto -> False

     Then you will be able to get predictions for images of any size, but the results will be worse than with auto -> True.
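A minimal preprocessing sketch with those settings, assuming the letterbox helper from this repo's utils.datasets (the exact signature may differ between versions; the file name is a placeholder):

import cv2
import numpy as np
import torch
from utils.datasets import letterbox

img0 = cv2.imread("test.jpg")  # BGR image of any resolution

# auto=False disables minimal-rectangle padding, and scaleFill=True stretches
# the image to the target shape, so the output is always exactly 384x640
img, ratio, (dw, dh) = letterbox(img0, new_shape=(384, 640), auto=False, scaleFill=True)

img = img[:, :, ::-1].transpose(2, 0, 1)  # BGR -> RGB, HWC -> CHW
img = np.ascontiguousarray(img, dtype=np.float32) / 255.0
tensor = torch.from_numpy(img).unsqueeze(0)  # shape (1, 3, 384, 640)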

zhao-lun commented 2 years ago

@otsebriy Hi, how was the performance after the conversion? [Inference speed & mAP]

otsebriy commented 2 years ago

> @otsebriy Hi, how was the performance after the conversion? [Inference speed & mAP]

Hi, the performance was the same as with PyTorch.