anrahman4 opened 2 years ago
I have found a solution:
Replace `dummy_input = torch.randn(1, input_size, requires_grad=True)` with `dummy_input = torch.randn(1, 3, 32, 32, requires_grad=True)`.
`torch.randn` creates a random tensor to feed into the model during export. The first argument is the batch size; it is set to 1 here, but other values should work as well. The second argument is the number of channels: in this tutorial the first convolutional layer expects 3 input channels (RGB images), so it is set to 3.
The last two arguments are height and width. Since the tutorial uses images from the CIFAR10 dataset, both are 32.
If you then run the Convert_ONNX function, you should get the proper output.
I can confirm this solution works :)
Thanks @anrahman4, glad I found this ;-) Does anyone have leads on how to prepare their own data to replace the data from this tutorial?
I am trying to follow this guide, and I wanted to export the PyTorch model (the .pth file) to .onnx. However, when constructing `dummy_input`, one of the parameters is `input_size`, which is not defined anywhere in the tutorial as a variable.
As a result, I am unable to convert the .pth file to an .onnx file. Please correct the code so that the conversion works as written.