dk-liang / FIDTM

[IEEE TMM] Focal Inverse Distance Transform Maps for Crowd Localization

ONNX giving wrong output #19

Closed: kHarshit closed this issue 2 years ago

kHarshit commented 2 years ago

I've converted the FIDTM model to ONNX using the following logic, but the output from ONNX is wrong.

import torch
import torch.nn as nn

class MyNet(nn.Module):
    """Add a 3x3 max pool on the model output for post-processing."""

    def __init__(self):
        super().__init__()

    def forward(self, x):
        output = nn.functional.max_pool2d(x, (3, 3), stride=1, padding=1)
        return x, output

model = get_seg_model()
model = nn.Sequential(model, MyNet())
model = nn.DataParallel(model, device_ids=[0])
... load model weights (logic similar to video_demo.py)
batch_size = 1  # placeholder batch size for the dummy input
dummy_input = torch.randn(batch_size, 3, 540, 960)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('Using', device)
dummy_input = dummy_input.to(device)
model.eval()
model = model.cuda()

torch.onnx.export(model.module,              # model being run
                  dummy_input,               # model input (or a tuple for multiple inputs)
                  "crowd_fidtm_model.onnx",  # where to save the model (file or file-like object)
                  export_params=True,        # store the trained parameter weights inside the model file
                  opset_version=11,          # the ONNX opset version to export the model to
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names=['input_1'],   # the model's input names
                  output_names=['output_1', 'output_2'],  # the model's output names
                  dynamic_axes={'input_1': {0: 'batch_size'},  # variable-length axes
                                'output_1': {0: 'batch_size'},
                                'output_2': {0: 'batch_size'}})

But after loading this ONNX model, the output is wrong.

In fact, the ONNX model gives almost the same values for every input image. Is this happening due to the if ... else blocks in the model? I'm not sure whether the model is getting converted correctly.
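
One way to narrow this down (a suggested check, not something from the thread) is to run the same tensor through both the PyTorch model and the exported ONNX model and compare the outputs numerically. A minimal sketch, assuming the export above succeeded, onnxruntime is installed, and model is the DataParallel-wrapped Sequential from the snippet:

import numpy as np
import onnxruntime as ort
import torch

# Same shape as the dummy input used for the export: (batch, 3, 540, 960).
x = torch.randn(1, 3, 540, 960)

# PyTorch reference outputs (FIDT map and its 3x3 max-pooled copy).
model.eval()
with torch.no_grad():
    torch_map, torch_pooled = model.module(x.cuda())

# ONNX Runtime outputs on the exact same tensor.
sess = ort.InferenceSession("crowd_fidtm_model.onnx", providers=["CPUExecutionProvider"])
ort_map, ort_pooled = sess.run(None, {"input_1": x.numpy()})

# If the export is faithful, both pairs should agree to within float tolerance.
np.testing.assert_allclose(torch_map.cpu().numpy(), ort_map, rtol=1e-3, atol=1e-5)
np.testing.assert_allclose(torch_pooled.cpu().numpy(), ort_pooled, rtol=1e-3, atol=1e-5)
print("PyTorch and ONNX Runtime outputs match")

If this comparison passes but real images still give near-identical outputs, the problem may lie in the image pre-processing (normalization, channel order) rather than in the export itself.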

KunMengcode commented 1 year ago

Have you followed the steps below? (image attached) I see that you added a max pooling with a 3×3 kernel, stride 1, and padding 1 after the model for the ONNX export. Have you applied any other post-processing after that?
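
For context, the 3×3 max pool is typically followed by a local-maximum test to turn the FIDT map into head points. A rough sketch of that decoding step, with an illustrative threshold value (not taken from this thread):

import numpy as np

def decode_points(fidt_map, pooled_map, threshold=0.06):
    """Turn an FIDT map and its 3x3 max-pooled copy into (x, y) head points.

    A pixel is kept only if it equals the pooled value (i.e. it is a local
    maximum in its 3x3 neighbourhood) and exceeds the threshold. The
    threshold here is illustrative; tune it for your data.
    """
    fidt_map = np.squeeze(fidt_map)      # (H, W)
    pooled_map = np.squeeze(pooled_map)  # (H, W)
    keep = (fidt_map == pooled_map) & (fidt_map > threshold)
    ys, xs = np.nonzero(keep)
    return list(zip(xs.tolist(), ys.tolist()))

With the two ONNX outputs from the earlier sketch, decode_points(ort_map, ort_pooled) would give the predicted head coordinates, and their count is the crowd estimate.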

KunMengcode commented 1 year ago

Sorry, I hadn't actually tried this until recently. I ran into the same problem earlier: I think some part of the processing gets lost during export. At first I exported the model the same way you did, but inference did not give the correct result, so I suspect there is an error in the exported graph. Later I tried exporting the model with torch.onnx.export directly, but there were still errors.
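
If the exported graph itself is suspect, one quick sanity check (again just a sketch, not something done in this thread) is to validate and inspect it with the onnx package before running it:

import onnx

model_proto = onnx.load("crowd_fidtm_model.onnx")
onnx.checker.check_model(model_proto)                  # structural validity of the graph
print(onnx.helper.printable_graph(model_proto.graph))  # list the ops that were actually traced

Because torch.onnx.export traces the model, Python-side if ... else branches are baked in along the path taken by the dummy input, so inspecting the printed graph can show whether the expected layers actually made it into the export.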