Closed yxt132 closed 5 years ago
Also, should I re-size the images to be consistent with the training setting (e.g. 320x320), or at least close to it, to get good results?
I just trained the new model from scratch rather than using the pre-trained model (epoch 12, stage 0). For testing, you should choose the "whole" method.
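On the resizing question: a fully convolutional encoder-decoder can usually run on the whole image as long as both sides are multiples of the network's stride. This is a minimal numpy sketch of that padding step (my own helper, not this repo's preprocessing code; the stride of 32 is an assumption):

```python
import numpy as np

def pad_to_stride(img, stride=32):
    """Pad H and W up to the next multiple of `stride` (reflect padding)
    so an encoder-decoder can process the whole image in one pass.
    Returns the padded image and the original size for cropping back."""
    h, w = img.shape[:2]
    ph = (stride - h % stride) % stride
    pw = (stride - w % stride) % stride
    pad = [(0, ph), (0, pw)] + [(0, 0)] * (img.ndim - 2)
    return np.pad(img, pad, mode="reflect"), (h, w)
```

After inference you would crop the predicted alpha back to the original `(h, w)`.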
Thanks for your response! Did you use batch size =1 and learning rate = 1e-5?
I was able to get similar results to yours for stage 0 now. Will try the later stages next. I am also thinking of adding a gradient loss to the loss function to see if better results can be achieved. Please keep us posted on your progress on the stage 1 training. We can compare notes.
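A gradient loss like the one mentioned above could be sketched as an L1 distance between the first-order spatial gradients of the predicted and ground-truth alpha mattes. This is a numpy illustration with my own naming, not code from this repo:

```python
import numpy as np

def gradient_loss(pred_alpha, gt_alpha):
    """L1 distance between first-order spatial gradients of the predicted
    and ground-truth alpha mattes. Penalizes over-smooth or noisy edges
    that a plain per-pixel alpha loss can miss."""
    dy_p, dx_p = np.gradient(pred_alpha)
    dy_g, dx_g = np.gradient(gt_alpha)
    return np.mean(np.abs(dy_p - dy_g)) + np.mean(np.abs(dx_p - dx_g))
```

In training this would be added to the existing alpha/compositional losses with a small weight.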
@yxt132 Hi, I'd like to ask about the test procedure. The Adobe dataset's foreground images all have a shadow around the object; what is it for? And how can I test the demo with an original image and a trimap?
could you please share one of your test set?
My plan for testing on my own images:
1. Instance-segment the person to get a mask (I'm not sure this is the right approach)
2. Generate a trimap from the mask
3. Run the author's model to get the alpha matte
4. Add the alpha channel to the original image
Can anyone explain how to test on my own images?
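For the trimap step, the usual recipe is to erode the mask for sure-foreground, dilate it for sure-background, and mark the band in between as unknown. A crude numpy sketch (my own helper; a real pipeline would typically use cv2.erode / cv2.dilate with a structuring element):

```python
import numpy as np

def mask_to_trimap(mask, radius=5):
    """Turn a binary segmentation mask into a trimap:
    255 = sure foreground (eroded mask), 0 = sure background,
    128 = unknown band around the boundary. Erosion/dilation are
    approximated here by iterated 4-neighbour shifts."""
    def dilate(m):
        out = m.copy()
        out[1:, :] |= m[:-1, :]; out[:-1, :] |= m[1:, :]
        out[:, 1:] |= m[:, :-1]; out[:, :-1] |= m[:, 1:]
        return out
    fg = mask.astype(bool)
    dil, bg = fg.copy(), ~fg
    for _ in range(radius):
        dil, bg = dilate(dil), dilate(bg)   # dilating ~fg == eroding fg
    trimap = np.zeros(mask.shape, dtype=np.uint8)
    trimap[dil] = 128    # unknown band (interior overwritten below)
    trimap[~bg] = 255    # eroded mask = sure foreground
    return trimap
```

The `radius` controls how wide the unknown band is; wider bands give the matting network more room but make the problem harder.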
Here is one of the test sets from Adobe:
foreground:
background:
composited image:
alpha:
Trimap:
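The composited image above is produced by the standard matting equation, C = alpha * F + (1 - alpha) * B. A numpy sketch (my own helper, not the dataset's compositing script):

```python
import numpy as np

def composite(fg, bg, alpha):
    """Matting equation: C = alpha * F + (1 - alpha) * B.
    fg, bg: float arrays in [0, 1], shape (H, W, 3);
    alpha: float array in [0, 1], shape (H, W)."""
    a = alpha[..., None]
    return a * fg + (1.0 - a) * bg
```

Note that wherever alpha is 0 the foreground pixel contributes nothing, so artifacts in the foreground outside the matte (like the shadows discussed in this thread) do not affect the composite as long as the alpha is accurate.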
@yxt132 The Adobe foreground images are not the original photos; the woman's hair has some shadow, which might have been added manually.
How can I test the model with my own images?
@yxt132 did you set batch size=1 and lr=1e-5 to get results similar to the author's? I do not understand the reason for setting batch size=1. Is it mentioned in the paper?
The paper only mentions the learning rate of 1e-5. I used batch size 1 to train; it's very slow, about 34 hours.
Do you know why the fg images of the Adobe dataset have shadows around the object?
@yxt132 Sorry, I don't know why the foreground images have such shadows either. I saw you mentioned earlier that you couldn't reproduce the accuracy of the author's released model, but later you got results close to the author's. Did you do anything in between? I still can't reproduce accuracy close to the released model. Also, I'd like to ask again why the batch size should be set to 1.
The shadows don't matter as long as you have a good alpha channel. I did not do much, just used the author's latest code, and got similar performance. I guess the batch size is 1 mainly because of GPU memory limitations. The author is in a better position to answer this question.
Thanks for the great work! I tried to resume training from the pre-trained model (epoch 12, stage 0) while incorporating the latest changes you made. However, the loss does not seem to improve. Would you share with us the hyperparameters you used during training? Do you plan to release new pre-trained models that incorporate the latest changes (erosion, etc.)? Thanks!