kelisiya opened this issue 4 years ago (status: Open)
Hi @kelisiya, sorry for the late reply due to the CVPR deadline. Do you mean the training loss, or calculating the evaluation errors? The connectivity loss is not used in training; it is only used when evaluating matte quality. It should be fine if you follow my instructions to run the code.
It works. I switched your backbone to DIM's and used the alpha loss and gradient loss; now I can train your model.
@kelisiya Nice, let me know if your model achieves better results:)
Some inference questions about IndexNet: the inference code uses np.clip, but the network's prediction is not bounded to [0, 1]. Is it correct that your training loss function uses cv2.normalize and then divides by 255? When I train from scratch (without your pretrained weights) for some epochs, the loss does not go down and the inference images show artifacts. I tried applying torch.clamp() to the network output tensor before computing the loss, and also normalizing it, but neither seems effective. Do you know what causes these edge artifacts?
@kelisiya It is normal that the network's output is not bounded by [0, 1], due to the nature of regression. This is why postprocessing is required to eliminate unreasonable outputs. However, you should not use the clip operator (or clamp in PyTorch) during training, because the gradient will be clipped to zero as well. This may be the reason why the loss does not decrease. The clip operator should only be applied at inference.
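The gradient-clipping effect described above is easy to verify with a toy example (this is an illustrative sketch, not code from the repository): torch.clamp passes gradients only for values inside the range, so any out-of-range prediction receives zero training signal.

```python
import torch

# A toy "prediction" with values outside [0, 1], requiring gradients.
pred = torch.tensor([-0.5, 0.5, 1.5], requires_grad=True)

# Clamping during training: gradients vanish for out-of-range entries.
loss = torch.clamp(pred, 0.0, 1.0).sum()
loss.backward()
print(pred.grad)  # tensor([0., 1., 0.]) — no signal for the clipped entries

# At inference, clipping is safe because no gradients are needed.
with torch.no_grad():
    alpha = pred.clamp(0.0, 1.0)
print(alpha)  # tensor([0.0000, 0.5000, 1.0000])
```

This is why the clip belongs only in the inference path: during training it silently stops learning for exactly the pixels that are most wrong.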
So are you using cv2.normalize in your training loss function? In other words, if I apply a sigmoid to the output tensor, might that also be effective?
I don't use cv2.normalize. I also tried sigmoid, but did not find it necessary.
So you train on the raw output tensor, without any activation function.
Exactly!
Thanks for your answer.
Two other questions: 1. How do you calculate the alpha loss? Is the input alpha or alpha/255? 2. Is the IndexNet input image + trimap/255, concatenated like in DIM?
Of course you should normalize the alpha before calculating the loss. Yes, the input is concatenated like in DIM.
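The two answers above can be sketched together (a hypothetical preprocessing snippet, not the repository's actual data loader — shapes and the 4-channel layout are assumptions based on DIM's convention): normalize image, trimap, and ground-truth alpha to [0, 1], then concatenate image and trimap channel-wise as the network input.

```python
import numpy as np
import torch

# Dummy 8-bit inputs standing in for a real image/trimap/alpha triplet.
image = np.random.randint(0, 256, (320, 320, 3), dtype=np.uint8)
trimap = np.random.choice([0, 128, 255], (320, 320)).astype(np.uint8)
alpha = np.random.randint(0, 256, (320, 320), dtype=np.uint8)

# Normalize everything to [0, 1]; the loss target is alpha/255, not raw alpha.
image_t = torch.from_numpy(image).permute(2, 0, 1).float() / 255.0   # (3, H, W)
trimap_t = torch.from_numpy(trimap).unsqueeze(0).float() / 255.0     # (1, H, W)
alpha_t = torch.from_numpy(alpha).float() / 255.0                    # target in [0, 1]

# DIM-style 4-channel input: RGB + trimap.
net_input = torch.cat([image_t, trimap_t], dim=0)  # (4, H, W)
```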
I'm trying to reproduce your paper. The CNN returns a tensor, but the loss function uses numpy, so I convert the tensor to numpy, compute the loss, and convert the result back to a tensor. I found that the connectivity loss can't work this way. How did you deal with it?