poppinace / indexnet_matting

(ICCV'19) Indices Matter: Learning to Index for Deep Image Matting

Some errors when using the connectivity loss function on GPU #10

Open kelisiya opened 4 years ago

kelisiya commented 4 years ago

I'm trying to reproduce your paper. The CNN returns a tensor, but the loss function uses numpy, so I convert the tensor to numpy, calculate the loss, and then convert back to a tensor. I found that the connectivity loss doesn't work this way. How did you deal with it?

poppinace commented 4 years ago

Hi @kelisiya, sorry for the late reply due to the CVPR deadline. Do you mean the training loss, or calculating the evaluation errors? The connectivity loss is not used in training; it is only used when evaluating matte quality. It should be fine if you follow my instructions to run the code.
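
Since the metric is evaluation-only, the GPU tensor can simply be detached and moved to numpy before scoring. A minimal sketch, where `connectivity_fn` is a placeholder for whatever numpy implementation of the connectivity error you use:

```python
import torch

# Evaluation-only: detach the prediction from the autograd graph and move it
# to the CPU before calling a numpy-based metric. `connectivity_fn` is a
# placeholder for your numpy connectivity-error implementation.
@torch.no_grad()
def score_matte(pred, alpha, trimap, connectivity_fn):
    # pred / alpha / trimap: (1, 1, H, W) tensors with values in [0, 1]
    to_np = lambda t: t.detach().cpu().numpy().squeeze() * 255.0
    return connectivity_fn(to_np(pred), to_np(alpha), to_np(trimap))
```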

kelisiya commented 4 years ago

> The connectivity loss is not used in training; it is only used when evaluating matte quality.

It works! I plugged your backbone into DIM and trained with the alpha loss and gradient loss; now I can train your model.

poppinace commented 4 years ago

@kelisiya Nice, let me know if your model achieves better results:)

kelisiya commented 4 years ago

Some inference questions about IndexNet: the inference code uses np.clip, but the network's returned prediction is not in [0, 1]. Is it right that your training loss function uses cv2.normalize and then divides by 255? When I train for some epochs without your pretrained weights, the loss does not go down and the inferred images have artifacts. I tried torch.clamp() on the returned tensor before computing the loss, and also normalizing it, but neither seems effective. Do you know what causes these edge artifacts?

poppinace commented 4 years ago

@kelisiya It is normal that the network's output is not bounded by [0, 1], due to the nature of regression. This is why postprocessing is required to eliminate unreasonable outputs. However, you should not use the clip operator, or clamp in PyTorch, during training, because the gradient will be clipped to zero as well. This may be the reason why your loss does not decrease. The clip operator should only be applied at inference.
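
Concretely, a minimal sketch of the split (illustrative names, with an L1 loss standing in for the actual training losses):

```python
import torch
import torch.nn.functional as F

def training_step(model, image, alpha_gt):
    # No clamp here: clamping would zero the gradient wherever the raw
    # regression output leaves [0, 1], stalling training.
    pred = model(image)
    return F.l1_loss(pred, alpha_gt)

@torch.no_grad()
def predict_matte(model, image):
    # Clip only as postprocessing at inference, to bound the matte to [0, 1].
    model.eval()
    return model(image).clamp(0, 1)
```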

kelisiya commented 4 years ago

So are you using cv2.normalize in your training loss function? In other words, if I apply a sigmoid to the returned tensor, might that also work?

poppinace commented 4 years ago

I don't use cv2.normalize. I also tried sigmoid, but did not find it necessary.

kelisiya commented 4 years ago

So you just train on the raw returned tensor, without any activation function?

poppinace commented 4 years ago

Exactly!
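
That is, the prediction head can end in a plain convolution. A hypothetical sketch (channel sizes are illustrative, not the paper's):

```python
import torch.nn as nn

# Hypothetical regression head: the last layer is a bare conv producing one
# alpha channel, with no sigmoid or clamp after it during training.
head = nn.Sequential(
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),  # raw, unbounded output
)
```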

kelisiya commented 4 years ago

Thanks for your answer.

kelisiya commented 4 years ago

Two other questions: 1. How do you calculate the alpha loss? Is the input alpha or alpha/255? 2. Is the IndexNet input image + trimap/255, concatenated as in DIM?

poppinace commented 4 years ago

Of course you should normalize the alpha before calculating the loss. Yes, it is the concatenated input, like DIM.
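
For concreteness, a minimal sketch of the data preparation implied here (names are illustrative; a real pipeline would also apply image normalization and augmentation):

```python
import cv2
import numpy as np
import torch

def make_sample(image_path, trimap_path, alpha_path):
    # Scale image, trimap, and ground-truth alpha from [0, 255] to [0, 1].
    image = cv2.imread(image_path).astype(np.float32) / 255.0        # (H, W, 3)
    trimap = cv2.imread(trimap_path, 0).astype(np.float32) / 255.0   # (H, W)
    alpha = cv2.imread(alpha_path, 0).astype(np.float32) / 255.0     # target

    # Concatenate the trimap as a 4th channel, as in DIM.
    x = np.concatenate([image, trimap[..., None]], axis=2)           # (H, W, 4)
    x = torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0)            # (1, 4, H, W)
    y = torch.from_numpy(alpha)[None, None]                          # (1, 1, H, W)
    return x, y
```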