Closed naguileraleal closed 1 year ago
i've not used onnx before; does it need a single patch or a batch?
i ask because if you look at the code here:
you'll see that torchsummary is able to provide a very nice summary, but doesn't have the batch dimension listed
Me neither. I have the same doubt. From this tutorial I gather it needs a single patch.
I suppose the dummy_input variable should have the same dimensions as the input that goes into the model when running in inference mode. I have not yet found out what these dimensions are, but I suppose I could find out with a little logging inside make_output_unet_cmd.py.
These are the relevant lines of code:
patch size should be: Batch x 3 {RGB} x patch_size x patch_size
you can use whatever batch size will fit into GPU memory
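so something along these lines should work as the dummy input (just a sketch; 4 and 256 are arbitrary examples):

```python
import torch

# batch x 3 (RGB) x patch_size x patch_size -- batch can be anything that fits in GPU memory
batch_size, patch_size = 4, 256
dummy_input = torch.randn(batch_size, 3, patch_size, patch_size)
# torch.onnx.export(model, dummy_input, "best_model.onnx")  # with your loaded UNet as `model`
```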
it looks like the problem is somewhere else though?
dim = input.dim() - 2 # Number of spatial dimensions.
AttributeError: 'NoneType' object has no attribute 'dim'
this would seem to suggest that input is "None", which is not related to a particular size but instead something like the type?
I spent a little more time debugging the script and the problem arises when the torch.onnx.export() function executes the forward method of the model.
I'll now attach part of the UNet definition so I can refer to it.
```python
class UNet(nn.Module):
    def __init__(self, in_channels=1, n_classes=2, depth=5, wf=6, padding=False,
                 batch_norm=False, up_mode='upconv', concat=True):
        """
        Implementation of
        U-Net: Convolutional Networks for Biomedical Image Segmentation
        (Ronneberger et al., 2015)
        https://arxiv.org/abs/1505.04597

        Using the default arguments will yield the exact version used
        in the original paper

        Args:
            in_channels (int): number of input channels
            n_classes (int): number of output channels
            depth (int): depth of the network
            wf (int): number of filters in the first layer is 2**wf
            padding (bool): if True, apply padding such that the input shape
                            is the same as the output.
                            This may introduce artifacts
            batch_norm (bool): Use BatchNorm after layers with an
                               activation function
            up_mode (str): one of 'upconv' or 'upsample'.
                           'upconv' will use transposed convolutions for
                           learned upsampling.
                           'upsample' will use bilinear upsampling.
        """
        super(UNet, self).__init__()
        assert up_mode in ('upconv', 'upsample')
        self.padding = padding
        self.depth = depth
        self.concat = concat
        prev_channels = in_channels
        self.down_path = nn.ModuleList()
        for i in range(depth):
            self.down_path.append(UNetConvBlock(prev_channels, 2**(wf+i),
                                                padding, batch_norm))
            prev_channels = 2**(wf+i)
        self.up_path = nn.ModuleList()
        for i in reversed(range(depth - 1)):
            self.up_path.append(UNetUpBlock(prev_channels, 2**(wf+i), up_mode,
                                            padding, batch_norm, concat))
            prev_channels = 2**(wf+i)
        self.last = nn.Conv2d(prev_channels, n_classes, kernel_size=1)

    def forward(self, x):
        blocks = []
        for i, down in enumerate(self.down_path):
            x = down(x)
            if i != len(self.down_path)-1:
                blocks.append(x)
                x = F.avg_pool2d(x, 2)
        for i, up in enumerate(self.up_path):
            x = up(x, blocks[-i-1])
        return self.last(x)
```
In the second for loop of the forward method, when i=0, the call x = up(x, blocks[-i-1]) returns None as the value of x, so on the next iteration the call is equivalent to x = up(None, blocks[-i-1]). This is what causes the exception. The thing is, I have no idea how to solve this.
I'll attach my best_model.pth in case you want to give it a try yourself.
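For what it's worth, the only way I can think of for a module call to evaluate to None is a forward method that never returns anything; a minimal sketch of that behaviour (not the real UNetUpBlock, just an illustration):

```python
import torch
import torch.nn as nn

class BrokenUpBlock(nn.Module):
    """Illustrative only: computes a result but forgets to return it."""
    def forward(self, x, bridge):
        out = x + bridge
        # no `return out` here, so calling the block yields None

block = BrokenUpBlock()
print(block(torch.ones(2), torch.ones(2)))  # prints: None
```

Whether something like that is actually happening in my script, I don't know yet.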
very strange, but you were able to actually train the model and make the associated output? the challenge is only in the onnx component? may be better to report this issue there?
Hi!
There was a bug in my script, specifically in the UNet class definition. I copy/pasted the structure of the class into the script and must have made a mistake during that step.
Importing the UNet class directly from QuickAnnotator's unet module fixed my original issue and I was able to convert the model to ONNX.
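In case it helps anyone else, the working conversion ended up looking roughly like this (the constructor arguments and checkpoint keys below are from memory and may need adjusting to how your model was trained; the script assumes it sits next to QuickAnnotator's unet.py):

```python
import torch
from unet import UNet  # QuickAnnotator's own UNet definition

# NOTE: constructor arguments and checkpoint layout may differ for your training run.
model = UNet(in_channels=3, n_classes=2, padding=True)
checkpoint = torch.load("best_model.pth", map_location="cpu")
model.load_state_dict(checkpoint["model_dict"] if "model_dict" in checkpoint else checkpoint)
model.eval()

dummy_input = torch.randn(1, 3, 256, 256)  # batch x 3 (RGB) x patch_size x patch_size
torch.onnx.export(model, dummy_input, "best_model.onnx",
                  input_names=["input"], output_names=["output"])
```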
I'm closing this issue now. Sorry for the inconvenience, and thanks again for your help.
great, glad you figured it out!
Hi! I've trained a model using QA, and now I'm looking to import it into FastPathology. For that reason, I've downloaded the model from QA's web interface and I'm trying to convert it to ONNX format.
To do that I'm using the following script
When executing this script I get the following error
I'm in doubt about the dimensions in this line: dummy_input = torch.randn((1, 3, 256, 256), requires_grad=True). Are they right? Any help is much appreciated!