MarcoForte / FBA_Matting

Official repository for the paper F, B, Alpha Matting
MIT License
464 stars 95 forks

Question: Normal Training Information #26

Open zoezhou1999 opened 4 years ago

zoezhou1999 commented 4 years ago

Hi, recently I have been reproducing your project and debugging my reimplementation. Could I ask at which training epoch the evaluation metrics on the Composition-1k dataset reach a reasonably good level? It would help me tell whether my current training is functioning normally or needs further adjustment. Thank you so much!

MarcoForte commented 4 years ago

Hi, I don't have the results on hand right now. In the paper we report very good results around epoch 20, so I suspect the results are decent around epoch 10. In my experience matting models take a long time to converge for fine structures like hair and transparency.

zoezhou1999 commented 4 years ago

Hi, thank you for your reply. I have another question about my loss setup:

  1. L1 and Laplacian losses on alpha, computed over the whole image;
  2. L1 and Laplacian losses on FG and BG, computed only where alpha > 0 and alpha < 1 respectively (in the case where there is no FG reconstruction);
  3. compositional losses for both alpha and FB, computed over the whole image;
  4. an exclusion loss adapted from the TensorFlow code of the paper that originally proposed it.

Do you think this is OK or not? Thank you so much~
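For concreteness, the setup described above might look like this in PyTorch. This is a minimal sketch, not the official training code: all tensor names are hypothetical, the masking follows the description in the question, and the Laplacian and exclusion terms are omitted here.

```python
import torch
import torch.nn.functional as F

def matting_losses(alpha_pred, fg_pred, bg_pred, alpha_gt, fg_gt, bg_gt, image):
    # alpha tensors are (B, 1, H, W); FG/BG/image are (B, 3, H, W)

    # L1 on alpha over the whole image
    l1_alpha = F.l1_loss(alpha_pred, alpha_gt)

    # L1 on FG only where alpha_gt > 0, and on BG only where alpha_gt < 1
    fg_mask = (alpha_gt > 0).float()  # broadcasts across the RGB channels
    bg_mask = (alpha_gt < 1).float()
    l1_fg = (fg_mask * (fg_pred - fg_gt).abs()).sum() / (3 * fg_mask.sum() + 1e-8)
    l1_bg = (bg_mask * (bg_pred - bg_gt).abs()).sum() / (3 * bg_mask.sum() + 1e-8)

    # Composition loss: alpha * F + (1 - alpha) * B should recover the input
    composite = alpha_pred * fg_pred + (1.0 - alpha_pred) * bg_pred
    l_comp = F.l1_loss(composite, image)

    return l1_alpha + l1_fg + l1_bg + l_comp
```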

zoezhou1999 commented 4 years ago

Hmmm, while training I noticed a phenomenon: the alpha loss drops to a small value in epoch 1 or 2 (though not a decent one compared to the FB loss), and after that it just bounces back and forth, decreasing too slowly to give any real improvement. Meanwhile the FB loss, which is larger than the alpha loss (though maybe not very large in absolute terms), keeps dropping faster. This seems to make learning the alpha matte harder, and I think it prevents the model from converging further. Is this normal or not? Thank you so much!
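One common mitigation for this kind of imbalance between loss terms is to weight the two groups explicitly so that neither dominates the gradient. A minimal sketch, with placeholder weights that are not from the paper:

```python
import torch

def weighted_total(alpha_loss: torch.Tensor, fb_loss: torch.Tensor,
                   w_alpha: float = 1.0, w_fb: float = 0.25) -> torch.Tensor:
    # w_alpha / w_fb are hypothetical hyperparameters to tune by watching
    # the magnitude of each term; they are not values from the FBA paper.
    return w_alpha * alpha_loss + w_fb * fb_loss
```

In practice you would log both terms separately during training and adjust the weights until they decrease at comparable rates.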

kartikwar commented 3 years ago

hey @zoezhou1999, in my personal experience just a normal L1 loss on alpha should be good enough to get decent initial results. Also, in the paper you can see the FB loss doesn't improve the metrics too much, so I think you can use all 4 losses on alpha only.

EricLe-dev commented 2 years ago

I'm experimenting with this model and found some interesting results. The training code that I wrote is based on the training code of GCA-Matting (as suggested by @MarcoForte). I have some questions that I hope can be answered:

  1. Since I want to benchmark different matting models, I'm only interested in the predicted alpha and neglected the foreground color prediction. The loss functions I used are computed only on the predicted alpha before backpropagation. After some epochs, the alpha prediction converged slightly (very slightly) and the foreground color prediction seemed to be wrong, which was expected. My question is: is this a correct approach?
  2. If I initialize the model with the weights trained by Marco, the losses bounce around a bit but generally converge, though very, very slowly. To make sure the training works, I trained the model on only 3 images. After 20 epochs it showed only a very tiny improvement. I believe there must be something wrong here.
  3. I tried initializing the model with the ResNet-50 weights from this repo: https://github.com/joe-siyuan-qiao/pytorch-classification/tree/e6355f829e85ac05a71b8889f4fff77b9ab95d0b. The loss kept increasing and the model did not learn at all. After the first epoch I tested the model and got a very weird alpha result (see the weight-loading sketch after this list). The trimap was: https://github.com/MarcoForte/FBA_Matting/blob/master/examples/trimaps/troll.png
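On question 3: when initializing from classification weights, a frequent pitfall is that the matting encoder's first conv expects more than 3 input channels (image plus trimap encoding), so those weights cannot be copied directly. A hedged sketch of shape-safe loading; `model` and the checkpoint path are placeholders, not names from this repo:

```python
import torch

# Hypothetical sketch: copy only the tensors whose names and shapes match,
# so mismatched layers (e.g. a first conv with extra trimap input channels)
# keep their fresh initialization instead of crashing the load.
ckpt = torch.load('resnet50_imagenet.pth', map_location='cpu')
src_state = ckpt.get('state_dict', ckpt)

dst_state = model.state_dict()
matched = {k: v for k, v in src_state.items()
           if k in dst_state and v.shape == dst_state[k].shape}
dst_state.update(matched)
model.load_state_dict(dst_state)
print(f'initialized {len(matched)} of {len(dst_state)} tensors from the checkpoint')
```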

I used the losses as suggested: L1 alpha loss, compositional loss, and Laplacian loss. The Laplacian loss I used was from MG-Matting.
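For reference, here is a minimal Laplacian pyramid loss in the spirit of the one used in MG-Matting-style codebases. This is a generic sketch, not MG-Matting's actual implementation; the 5x5 kernel, the pyramid depth, and the 2**level weighting are assumptions:

```python
import torch
import torch.nn.functional as F

def _gauss_kernel(channels, device, dtype):
    # 5x5 binomial (Gaussian) kernel, one copy per channel for grouped conv
    k = torch.tensor([1., 4., 6., 4., 1.], device=device, dtype=dtype)
    k = torch.outer(k, k)
    k = k / k.sum()
    return k.expand(channels, 1, 5, 5).contiguous()

def laplacian_loss(pred, gt, levels=5):
    # L1 over Laplacian pyramid bands, weighted by 2**level
    channels = pred.shape[1]
    kernel = _gauss_kernel(channels, pred.device, pred.dtype)
    loss = 0.0
    for level in range(levels):
        pred_blur = F.conv2d(pred, kernel, padding=2, groups=channels)
        gt_blur = F.conv2d(gt, kernel, padding=2, groups=channels)
        # band-pass (Laplacian) component at this pyramid level
        loss = loss + (2 ** level) * F.l1_loss(pred - pred_blur, gt - gt_blur)
        # downsample for the next, coarser level
        pred = F.avg_pool2d(pred_blur, 2)
        gt = F.avg_pool2d(gt_blur, 2)
    return loss
```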

Can someone please give me some suggestions on what I should do? Thank you so much!