Many thanks for your great work attempting to reproduce the DSNet paper!
After reading your code, in particular the `dsnet.py` file, I noticed 4 potential issues:
1. The first ten layers of the VGG-16 frontend should not be trainable (this is not made clear in the paper).
2. The dilated convolution layers inside the DDCB block should have no bias; according to the paper, the implementation is similar to the DenseASPP example.
3. According to the paper, there is no ReLU layer at the end of the DDCB block.
4. Inside the DDCB block again, the last concatenation should include `x1_raw`, i.e. `x3 = torch.cat([x, x1_raw, x2_raw, x3_raw], 1)` (see the sketch after this list).
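For reference, here is a minimal sketch of what I have in mind, combining the changes above. Variable names such as `x1_raw` follow the snippet in point 4; the channel sizes (256-channel 1x1 bottlenecks, 64-channel dilated convolutions), the dilation rates 1/2/3, the final 3x3 convolution, and the torchvision-based frontend are my own assumptions from reading the paper, not the repository's actual layout:

```python
import torch
import torch.nn as nn
from torchvision import models


def build_frozen_frontend():
    # Point 1: freeze the first ten convolutional layers of the VGG-16 frontend.
    # Using torchvision's vgg16 and the [:23] slice (conv1_1 .. conv4_3) is my
    # assumption; adapt to however dsnet.py actually builds the frontend.
    vgg = models.vgg16(pretrained=True)
    frontend = nn.Sequential(*list(vgg.features.children())[:23])
    for param in frontend.parameters():
        param.requires_grad = False
    return frontend


class DDCB(nn.Module):
    """Dense dilated convolution block with points 2-4 applied.

    Channel sizes and dilation rates are assumptions, not taken from the repo.
    """

    def __init__(self, in_channels=512):
        super().__init__()
        # Point 2: no bias on the dilated 3x3 convolutions (DenseASPP-style).
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channels, 256, 1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 64, 3, padding=1, dilation=1, bias=False), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(
            nn.Conv2d(in_channels + 64, 256, 1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 64, 3, padding=2, dilation=2, bias=False), nn.ReLU(inplace=True))
        self.conv3 = nn.Sequential(
            nn.Conv2d(in_channels + 128, 256, 1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 64, 3, padding=3, dilation=3, bias=False), nn.ReLU(inplace=True))
        # Point 3: the block ends with a plain 3x3 convolution, no ReLU after it.
        self.out = nn.Conv2d(in_channels + 192, in_channels, 3, padding=1)

    def forward(self, x):
        x1_raw = self.conv1(x)
        x2_raw = self.conv2(torch.cat([x, x1_raw], 1))
        x3_raw = self.conv3(torch.cat([x, x1_raw, x2_raw], 1))
        # Point 4: the last concatenation also includes x1_raw.
        x3 = torch.cat([x, x1_raw, x2_raw, x3_raw], 1)
        return self.out(x3)
```

As a quick sanity check, `DDCB(512)(torch.randn(1, 512, 32, 32))` should return a tensor with the same shape as its input.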
I also think there is a typo in Figure 2 of the paper: the DDCB paragraph outlines a full connection of the dilated layers.
I am training a network with these changes on the ShanghaiTech Part B dataset; let's see if I can reproduce the paper's results.