ckcraig01 closed this issue 4 years ago
DBFace is a fully convolutional network, so when exporting to ONNX the dummy resolution is set to 32x32; the input resolution can then be changed as needed at inference time. The 32x32 export size has nothing to do with the 800x800 size used for training.
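The point above can be demonstrated in isolation: a convolution's weights depend only on the kernel size, not on the input resolution, so the same trained weights run at 32x32 or 800x800. Here is a minimal numpy sketch (not DBFace's actual code) of a single "valid" convolution applied at two different resolutions with identical weights:

```python
import numpy as np

def conv2d_valid(x, w):
    """Naive 'valid' 2D convolution (really cross-correlation, as in CNNs)."""
    windows = np.lib.stride_tricks.sliding_window_view(x, w.shape)
    return np.einsum("ijkl,kl->ij", windows, w)

kernel = np.random.rand(3, 3)  # fixed weights, independent of input size
small = conv2d_valid(np.random.rand(32, 32), kernel)    # export-time dummy resolution
large = conv2d_valid(np.random.rand(800, 800), kernel)  # training resolution
print(small.shape, large.shape)  # (30, 30) (798, 798)
```

Only the output spatial size changes with the input; the kernel itself is reused unchanged, which is why the ONNX dummy resolution is arbitrary for a fully convolutional model.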
Dear Author:
Thank you for the help!
Dear Author:
Thank you very much.
The dummy input is set to dummy = torch.zeros((1, 3, 32, 32)).cuda(): https://github.com/dlunion/DBFace/blob/1b408dbc37ed5b235c619b52dc3bec9276fd38f9/train/small/onnx.py#L34
However, the model's training input size appears to be 800x800:
https://github.com/dlunion/DBFace/blob/1b408dbc37ed5b235c619b52dc3bec9276fd38f9/train/small/train-small-H-keep12-noext-ignoresmall2.py#L19
I am not sure whether you mean that we should change the model input size (for example, to 512x512), re-train the model, and then use that same size (1, 3, 512, 512) to generate the final ONNX export. Thank you very much.