Closed issue (closed by ghost 5 years ago)
official version: Andrew-Qibin
The main gap, I think, is caused by the learning strategy (e.g., initialization and learning-rate adjustment).
Compared with the original paper, what are the differences in the short connections and in side outputs 1 to 6?
See the following picture:
Inference: the paper fuses Z_fuse from side outputs 2, 3, and 4; here, however, I use all of the side outputs.
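A minimal sketch of that difference, assuming fusion is a simple average of the selected saliency maps (the function name and the flat-list representation are mine, used here as a pure-Python stand-in for tensor averaging):

```python
def fuse_side_outputs(side_outputs, indices=None):
    """Average a subset of side-output saliency maps.

    side_outputs: list of saliency maps, each a flat list of floats.
    indices: 0-based side outputs to fuse; None means fuse all of them.
    """
    if indices is None:
        indices = list(range(len(side_outputs)))
    n = len(indices)
    length = len(side_outputs[0])
    return [sum(side_outputs[i][p] for i in indices) / n for p in range(length)]

# Six toy one-pixel side outputs.
maps = [[0.0], [0.2], [0.4], [0.6], [0.8], [1.0]]

# Paper-style fusion: side outputs 2, 3, 4 (0-based indices 1, 2, 3).
paper_fuse = fuse_side_outputs(maps, indices=[1, 2, 3])

# This repo's variant: fuse everything.
all_fuse = fuse_side_outputs(maps)
```

With these toy values the paper-style fusion averages to roughly 0.4 and the fuse-all variant to roughly 0.5, which is the whole behavioral difference being asked about.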
What is the meaning of this configuration?

```python
extra = {'dss': [(64, 128, 3, [8, 16, 32, 64]), (128, 128, 3, [4, 8, 16, 32]), (256, 256, 5, [8, 16]), (512, 256, 5, [4, 8]), (512, 512, 5, []), (512, 512, 7, [])]}
connect = {'dss': [[2, 3, 4, 5], [2, 3, 4, 5], [4, 5], [4, 5], [], []]}
k = [out[x] for x in [1, 2, 3, 6]]
```

Thanks.
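One possible reading of those two dicts, to make the question concrete. The field names below are my guesses (the repo does not document them): each `extra` tuple seems to describe one side-output branch as (input channels, mid channels, kernel size, upsample factors for incoming short connections), and each `connect` entry lists which deeper side outputs feed that branch. The sketch just pairs them up and prints a summary:

```python
extra = {'dss': [(64, 128, 3, [8, 16, 32, 64]), (128, 128, 3, [4, 8, 16, 32]),
                 (256, 256, 5, [8, 16]), (512, 256, 5, [4, 8]),
                 (512, 512, 5, []), (512, 512, 7, [])]}
connect = {'dss': [[2, 3, 4, 5], [2, 3, 4, 5], [4, 5], [4, 5], [], []]}

def describe(extra, connect):
    # Pair each side-output branch spec with its short-connection sources.
    rows = []
    for i, ((cin, cmid, k, scales), srcs) in enumerate(
            zip(extra['dss'], connect['dss'])):
        rows.append(f"side {i + 1}: in={cin} mid={cmid} kernel={k} "
                    f"short connections from sides {[s + 1 for s in srcs]} "
                    f"(upsample x{scales})")
    return rows

for line in describe(extra, connect):
    print(line)
```

If `out` holds the six side outputs followed by the fused map, then `k = [out[x] for x in [1, 2, 3, 6]]` would pick side outputs 2, 3, 4 plus the fused output, which would match the paper's Z_fuse choice; that is a guess, not something stated in the thread.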
What is the difference between your code and the original paper, "Deeply Supervised Salient Object Detection with Short Connections"? Thanks.