You are using wrong values to initialize BoxConv2d. I've documented the constructor; please run help(BoxConv2d) to see the docstrings.
Hi,
Here's what it says:
Input : `(batch_size) x (in_planes) x (h) x (w)`
As you can see above, the box convolution layer has the same number of channels as the convolution layer, and the batch size is one (as in the mnist code). I've tried different values for w and h, but I get the same error.
What do you suggest?
"the box convolution layer has the same number of channels as the convolution layer"
"the batch size is one (as in the mnist code)"
You seem to be confusing batch size, number of input channels and number of output channels.
I urge you to carefully read the rest of help(BoxConv2d). It describes all constructor parameters; in_planes and num_filters are strictly defined there.
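For illustration, a rough sketch (the numbers are placeholders, not your network, and I'm assuming the usual from box_convolution import BoxConv2d): the layer takes in_planes input channels and returns in_planes * num_filters output channels.

import torch
from box_convolution import BoxConv2d

box = BoxConv2d(64, 4, 28, 28)   # in_planes=64, num_filters=4; last two args are the input h and w (28, 28 as in mnist.py)
x = torch.randn(1, 64, 28, 28)   # batch_size x in_planes x h x w, exactly the docstring's input shape
y = box(x)
print(y.shape)                   # expected: torch.Size([1, 256, 28, 28]), i.e. 64 * 4 channels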
Hi,
it seems to have started working when I set the following parameters:
nn.Conv2d(3, 64, 3, padding=1),
BoxConv2d(64, 1, 3, 3)
According to the constructor, the following are the arguments:
in_planes: int
    Number of channels in the input image (as in Conv2d).
num_filters: int
    Number of filters to apply per channel (as in depthwise Conv2d).
According to the architecture, the Conv2d input channels are 3 and the output channels are 64. So is this why the input channel count of the box convolution layer is 64? Since we have added it to the nn.Sequential, the output of the Conv2d will go into the BoxConv2d.
Am I thinking correctly here?
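(Just to spell out how I picture the shapes flowing — the 105 x 252 spatial size is only a placeholder taken from my error messages, and I'm assuming the same BoxConv2d import as above:)

import torch
import torch.nn as nn
from box_convolution import BoxConv2d

x = torch.randn(1, 3, 105, 252)         # batch x channels x h x w
y = nn.Conv2d(3, 64, 3, padding=1)(x)   # -> 1 x 64 x 105 x 252
z = BoxConv2d(64, 1, 3, 3)(y)           # in_planes=64 matches y, and num_filters=1 keeps it at 64 channels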
Also, here are the other things I have tried:
1) I tried to change the arguments for BoxConv2d to (64, 3, 3, 3). I get the following:
RuntimeError: Given groups=1, weight of size [64, 64, 3, 3], expected input[1, 192, 105, 252] to have 64 channels, but got 192 channels instead
This is probably because the BoxConv2d layer outputs 192 channels, but the next Conv2d layer expects 64 channels.
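If that's the case, I guess either of these would balance the channels (just a sketch, since 192 = 64 * 3):

import torch.nn as nn
from box_convolution import BoxConv2d

# option A: keep num_filters = 1, so the box layer keeps 64 channels
option_a = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    BoxConv2d(64, 1, 3, 3),
    nn.Conv2d(64, 64, 3, padding=1),
)

# option B: keep num_filters = 3, but let the next Conv2d accept 64 * 3 = 192 channels
option_b = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    BoxConv2d(64, 3, 3, 3),
    nn.Conv2d(192, 64, 3, padding=1),
)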
2) Something like your mnist.py:
# conv1
self.conv_B = BoxConv2d(64, 1, 3, 3)
self.conv1 = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 3, padding=1),
    # BoxConv2d(1, 64, 28, 28),
    nn.ReLU(inplace=True),
)
And then
conv1 = self.conv1(self.conv_B(x))
I again get the error related to the parameters. So is there any way to set the parameters so that it works for two convolution layers, or do I have to set the parameters for every layer?
Here's one more thing I've tried:
# conv1
self.conv_B = BoxConv2d(3, 64, 3, 3)
self.conv1 = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 3, padding=1),
    # BoxConv2d(1, 64, 28, 28),
    nn.ReLU(inplace=True),
)
I get the following:
RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 192, 252, 105] to have 3 channels, but got 192 channels instead
Maybe you mistakenly switched the order and actually meant to do this: conv1 = self.conv_B(self.conv1(x))
I tried that as well. I'm getting the same parameter error.
Sorry, but the number of channels is really trivial to balance. You only have to set in_planes the same as in Conv2d; num_filters is how many times the number of channels will grow.
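For your two-convolution block, something along these lines should balance (a sketch only; I'm using num_filters = 2, and the 105 x 252 feature-map size from your error messages as the last two arguments, like 28, 28 in mnist.py):

import torch.nn as nn
from box_convolution import BoxConv2d

conv1 = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    BoxConv2d(64, 2, 105, 252),           # in_planes = 64, i.e. whatever the previous layer outputs
    nn.Conv2d(64 * 2, 64, 3, padding=1),  # channels grew num_filters (= 2) times, so accept 128 here
    nn.ReLU(inplace=True),
)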
Thanks. Also, am I understanding the functioning properly here?
If you mean this, then yes, it's correct:
"According to the architecture, the Conv2d input channels are 3 and the output channels are 64. So is this why the input channel count of the box convolution layer is 64? Since we have added it to the nn.Sequential, the output of the Conv2d will go into the BoxConv2d. Am I thinking correctly here?"
Hey,
I am trying to implement box convolution for HED (Holistically-Nested Edge Detection), which uses the VGG architecture. Here's the architecture with the box convolution layer:
I get the following error:
RuntimeError: BoxConv2d: all parameters must have as many rows as there are input channels (box_convolution_forward at src/box_convolution_interface.cpp:30)
Can you help me with this?
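(A minimal sketch of the kind of mismatch that, per the discussion above, seems to trigger this error — in_planes not matching the actual channel count of the tensor BoxConv2d receives. The sizes and layers here are placeholders, not the actual HED network:)

import torch
import torch.nn as nn
from box_convolution import BoxConv2d

x = torch.randn(1, 3, 224, 224)                 # placeholder RGB input
features = nn.Conv2d(3, 64, 3, padding=1)(x)    # 64 channels after the first conv
box = BoxConv2d(3, 64, 224, 224)                # in_planes = 3 no longer matches the 64-channel tensor
out = box(features)                             # presumably raises the "as many rows as there are input channels" RuntimeError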