huochaitiantang / pytorch-deep-image-matting

PyTorch implementation of Deep Image Matting
294 stars 71 forks

Cannot re-produce the results #18

Closed yxt132 closed 5 years ago

yxt132 commented 5 years ago

Thanks for the great work! I tried to resume training from the pre-trained model (epoch 12, stage 0) after incorporating the latest changes you made. However, the loss does not seem to improve. Would you share the hyperparameters you used during training? Do you plan to release new pre-trained models that incorporate the latest changes (erosion, etc.)? Thanks!

yxt132 commented 5 years ago

Also, should I resize the images to be consistent with the training setting (e.g. 320x320), or at least close to it, to get good results?

huochaitiantang commented 5 years ago

I just trained the new model from scratch rather than resuming from the pre-trained model (epoch 12, stage 0). For testing, you should choose the "whole" method.
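For context, the "whole" method runs inference at the image's original resolution, so the input typically needs spatial dimensions the fully convolutional encoder can downsample cleanly. A minimal sketch, assuming a VGG-16-style encoder with a downsampling factor of 32 (the helper name and the exact factor are assumptions, not taken from this repo):

```python
import math

def pad_to_multiple(h, w, multiple=32):
    """Hypothetical helper: round spatial dims up to the nearest
    multiple so an encoder that downsamples by `multiple` accepts them."""
    new_h = math.ceil(h / multiple) * multiple
    new_w = math.ceil(w / multiple) * multiple
    return new_h, new_w

print(pad_to_multiple(321, 500))  # -> (352, 512)
```

The image would then be padded (or resized) to the returned shape before inference, and the predicted alpha cropped back to the original size.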

yxt132 commented 5 years ago

Thanks for your response! Did you use batch size = 1 and learning rate = 1e-5?

yxt132 commented 5 years ago

I was able to get results similar to yours for stage 0 now, and will try the later stages next. I am also thinking of adding a gradient loss to the loss function to see whether it improves results. Please keep us posted on your progress with the stage 1 training; we can compare notes.
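The gradient loss mentioned above can be sketched as an L1 penalty on the finite-difference gradients of the predicted versus ground-truth alpha. A NumPy sketch (the function name is illustrative; actual training code would use the PyTorch equivalent so the loss is differentiable):

```python
import numpy as np

def gradient_loss(pred, target):
    """L1 distance between finite-difference gradients of two alpha mattes."""
    dyp, dxp = np.gradient(pred)      # gradients along rows and columns
    dyt, dxt = np.gradient(target)
    return np.abs(dyp - dyt).mean() + np.abs(dxp - dxt).mean()
```

Such a term penalizes over-smoothed alpha edges that a plain per-pixel loss can miss, e.g. around hair strands.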

FantasyJXF commented 5 years ago

@yxt132 Hi, I'd like to ask about the test procedure. The Adobe dataset's foreground images all have a shadow around the object. What is that for? And how can I test the demo with my own image and trimap?

Could you please share one of your test sets?

FantasyJXF commented 5 years ago

(attached image: woman)

Can anyone teach me how to test my own image?

yxt132 commented 5 years ago

Here is one of the test sets from Adobe:

- foreground: girl-1219339_1920_0 3
- background: girl-1219339_1920_0 2
- composited image: girl-1219339_1920_0 4
- alpha: girl-1219339_1920_0
- trimap: girl-1219339_1920_0 5
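For context, the composited images in the Adobe dataset follow the standard matting equation C = alpha * F + (1 - alpha) * B, where alpha is the matte in [0, 1]. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def composite(fg, bg, alpha):
    """Matting equation C = alpha * F + (1 - alpha) * B,
    with a single-channel alpha broadcast over the color channels."""
    a = alpha[..., None].astype(np.float64)
    return a * fg + (1.0 - a) * bg
```

With alpha equal to 1 everywhere the result is the foreground; with alpha equal to 0 it is the background, and fractional values blend the two, which is why the dataset ships all four images per sample.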

FantasyJXF commented 5 years ago

@yxt132 The Adobe foreground images are not the original photos; the woman's hair has some shadow around it, which might have been added manually.

How do I test the model with my own images?

wrrJasmine commented 5 years ago

@yxt132 Did you set batch size = 1 and lr = 1e-5 to get results similar to the author's? I do not understand the reason for setting the batch size to 1. Is it mentioned in the paper?

FantasyJXF commented 5 years ago

The paper only mentions the learning rate of 1e-5. I used a batch size of 1 to train; it was very slow, about 34 hours.

Do you know why the fg images of the Adobe dataset have shadows around the object?


wrrJasmine commented 5 years ago

@yxt132 Sorry, I don't know why the foreground images have shadows like that either. I saw you mentioned earlier that you could not reproduce the accuracy of the author's released model, but later you got accuracy close to the author's. What did you do in between? I still cannot reproduce accuracy close to the author's released model. Also, I'd like to ask again why the batch size should be set to 1.

yxt132 commented 5 years ago

The shadows don't matter as long as you have a good alpha channel. I did not do much; I just used the author's latest code and got similar performance. I guess the batch size is 1 mainly because of GPU memory limitations. The author is in a better position to answer this question.
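On testing your own images: the model needs a trimap alongside the RGB input. When a ground-truth alpha is available, a trimap is commonly derived by dilating the blended region, similar in spirit to the erosion step mentioned earlier in this thread. A NumPy sketch (`make_trimap`, `dilate`, and the radius are illustrative; the repo's own preprocessing may differ):

```python
import numpy as np

def dilate(mask, r):
    """Naive square dilation of a boolean mask by radius r."""
    padded = np.pad(mask, r)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def make_trimap(alpha, radius=2):
    """Hypothetical helper: definite fg where alpha == 255, definite bg
    where alpha == 0, and a dilated unknown band of 128 in between."""
    unknown = dilate((alpha > 0) & (alpha < 255), radius)
    trimap = np.zeros_like(alpha)
    trimap[alpha == 255] = 255
    trimap[unknown] = 128
    return trimap
```

For a photo without any alpha, the trimap has to come from elsewhere, e.g. painted by hand or produced by a coarse segmentation model.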