Hi,
thanks for this repository! So far, SAM2UNet performs really well in my experiments.
I just came across an implementation detail that confused me a bit: the BasicConv2d block, sequences of which are used in the RFB module, has a ReLU child module, but it is never used:
class BasicConv2d(nn.Module):
    def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0, dilation=1):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_planes, out_planes,
                              kernel_size=kernel_size, stride=stride,
                              padding=padding, dilation=dilation, bias=False)
        self.bn = nn.BatchNorm2d(out_planes)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        # SHOULD THERE BE `x = self.relu(x)` HERE?
        return x
Was the activation omitted intentionally and, if so, what is the reason?
Hi, we follow exactly the RFB_modified design used in PraNet and other popular salient/camouflaged object detection networks. In the original RFB from RFBNet, the ReLU is also partially disabled. Unfortunately, the PraNet paper does not explain the motivation for this design choice regarding ReLU.
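To make this concrete, below is a minimal sketch of the RFB_modified pattern that PraNet-style code follows, reusing the BasicConv2d from the question. The class name RFBSketch, the number of branches, and the channel sizes here are illustrative, not the repository's actual code. The point it shows is why the per-layer ReLU goes unused: each BasicConv2d stays linear (conv + BN), and a single ReLU is applied only once, after the dilated branches are fused and added to the residual path.

import torch
import torch.nn as nn

class BasicConv2d(nn.Module):
    # Conv + BN, as in the question above; the ReLU child exists but is not applied in forward.
    def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0, dilation=1):
        super().__init__()
        self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size,
                              stride=stride, padding=padding, dilation=dilation, bias=False)
        self.bn = nn.BatchNorm2d(out_planes)
        self.relu = nn.ReLU(inplace=True)  # defined but unused

    def forward(self, x):
        return self.bn(self.conv(x))

class RFBSketch(nn.Module):
    # Only two branches for brevity; the real RFB_modified uses more dilated branches.
    def __init__(self, in_channel, out_channel):
        super().__init__()
        self.branch0 = BasicConv2d(in_channel, out_channel, kernel_size=1)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channel, out_channel, kernel_size=1),
            BasicConv2d(out_channel, out_channel, kernel_size=3, padding=3, dilation=3),
        )
        self.conv_cat = BasicConv2d(2 * out_channel, out_channel, kernel_size=3, padding=1)
        self.conv_res = BasicConv2d(in_channel, out_channel, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x_cat = self.conv_cat(torch.cat((self.branch0(x), self.branch1(x)), dim=1))
        # The only nonlinearity in the block: applied once, after the residual addition.
        return self.relu(x_cat + self.conv_res(x))

x = torch.randn(1, 64, 32, 32)
print(RFBSketch(64, 32)(x).shape)  # -> torch.Size([1, 32, 32, 32])

So, whether or not omitting the per-conv ReLU was originally intentional, the block as a whole is still nonlinear because of that final activation.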