pkuCactus / BDCN

The code for the CVPR2019 paper Bi-Directional Cascade Network for Perceptual Edge Detection
MIT License

why stop gradients on upsample #8

Open shoutOutYangJie opened 5 years ago

shoutOutYangJie commented 5 years ago

```python
o1, o2, o3, o4, o5 = s1.detach(), s2.detach(), s3.detach(), s4.detach(), s5.detach()
o11, o21, o31, o41, o51 = s11.detach(), s21.detach(), s31.detach(), s41.detach(), s51.detach()
```

Why do you use `detach()` here? Do you mean that VGG's parameters are frozen?

pkuCactus commented 5 years ago

No, we don't stop the gradient of the upsample. We just stop the gradient from flowing back into the previous predictions, because we assume each prediction is already the proper edge map for that layer.
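A minimal standalone sketch (not the BDCN code; `w` and `v` are hypothetical stand-ins for earlier- and later-layer parameters) of what `detach()` does here: the tensor's values are kept, but the backward pass stops at the detach point, so the parameters that produced the detached prediction receive no gradient from that term.

```python
import torch

# Hypothetical parameters: w plays the role of an earlier layer
# (e.g. the VGG side that produced s1), v a later layer that
# consumes the prediction.
w = torch.tensor(2.0, requires_grad=True)
v = torch.tensor(1.0, requires_grad=True)

# Without detach: the gradient flows back through s into w.
s = w * 3.0                  # an intermediate prediction, like s1
(s * v).backward()
print(w.grad)                # tensor(3.) -- w is updated by this loss
print(v.grad)                # tensor(6.)

# With detach: same values in o, but autograd treats it as a
# constant, so only parameters downstream of the detach (v) get
# gradients -- w is untouched by this term.
w.grad, v.grad = None, None
o = s.detach()               # like o1 = s1.detach() in the question
(o * v).backward()
print(w.grad)                # None -- gradient stopped at detach()
print(v.grad)                # tensor(6.)
```

So detaching does not freeze VGG globally; VGG still gets gradients through the non-detached paths, just not through these prediction terms.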

yxchng commented 5 years ago

@pkuCactus Is the `detach()` necessary? What happens if we do not call `detach()`?