hbilen / WSDDN

Weakly Supervised Deep Detection Networks (CVPR 2016)

Gradient Fix, Switch to previous loss formulation #4

Closed: senthilps8 closed this 6 years ago

senthilps8 commented 7 years ago

The gradient computation was still following your previous formulation of the loss, `p(y_c=1) * euclidean_loss`, and not `p(y_c=1)^2 * euclidean_loss`, so I'm assuming the former is the formulation that works.
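(A minimal sketch of the difference, not the repository code: `p`, `x`, and `xGt` are hypothetical values, and the Euclidean loss is assumed to be `0.5 * ||x - xGt||^2`. The point is that the backward pass must carry the same power of `p` as the forward loss.)

```matlab
p   = 0.8;             % hypothetical class score p(y_c = 1)
x   = randn(4096, 1);  % hypothetical feature of a selected box
xGt = randn(4096, 1);  % hypothetical feature of the top-scoring box
d   = x - xGt;

% Previous formulation: p * euclidean_loss
loss1 = p   * 0.5 * sum(d.^2);
grad1 = p   * d;       % gradient the existing backward pass computes

% Later formulation: p^2 * euclidean_loss
loss2 = p^2 * 0.5 * sum(d.^2);
grad2 = p^2 * d;       % would require p^2 in the backward pass as well
```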

The gradient also had a tiny bug that probably occurs only rarely: `obj.gtIdx{c1}` can overlap with `obj.boxIdx{c2}`, so the gradient cannot be set independently for each class `c` as:

```matlab
derInputs{1}(:,:,:,obj.boxIdx{c}) = bsxfun(@minus, inputs{1}(:,:,:,obj.boxIdx{c}), inputs{1}(:,:,:,obj.gtIdx{c}));
```
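(Again a sketch rather than the actual PR diff, assuming `obj.gtIdx{c}` holds the single top-scoring box for class `c` and omitting the `p(y_c=1)` weighting for brevity; `der` and `numClasses` are illustrative names. The key change is accumulating with `+` instead of assigning with `=`, so a box that appears in several classes' index sets keeps all its gradient contributions.)

```matlab
der = zeros(size(inputs{1}), 'like', inputs{1});
for c = 1:numClasses
  % Difference between each selected box feature and the top box feature.
  d = bsxfun(@minus, inputs{1}(:,:,:,obj.boxIdx{c}), ...
                     inputs{1}(:,:,:,obj.gtIdx{c}));
  % Accumulate: a box in boxIdx{c2} may also be the top box gtIdx{c1}
  % of another class, so plain assignment would overwrite its gradient.
  der(:,:,:,obj.boxIdx{c}) = der(:,:,:,obj.boxIdx{c}) + d;
  % The top box receives the opposite gradient from the minus term.
  der(:,:,:,obj.gtIdx{c})  = der(:,:,:,obj.gtIdx{c})  - sum(d, 4);
end
derInputs{1} = der;
```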

I think this PR should fix it. PS: Haven't tested this.

hbilen commented 7 years ago

Hi Senthil, sorry I forgot to write to you earlier. Thanks a lot for the patch. I will try to see which formulation gives the best result and then maybe add an option to choose between them.

senthilps8 commented 7 years ago

No worries. Hope the results improve!