Closed — luke14free closed this issue 6 years ago
I have run the code, but the result is not promising. :( Any suggestions for improvement?
The settings in the original paper are probably better. I just tried something that was faster for me to implement, since I wanted a working version of "Deep Automatic Portrait Matting" as soon as possible. In the end I found that the ECCV 2016 paper requires more computational resources than I can afford and has limited practical value, so I gave up and stopped optimizing it.
@PetroWu So if I understand correctly, the paper implements FCN-8s, which is too computationally intensive, and you could only test with VGG19. If computational resources are the issue, I am confident my company could finance it. Would Google Cloud's Tesla K80 GPUs be sufficient for this job?
I have run the code, but the training process seems to stop at 6K iterations. The loss is still high and has not converged. What should I do to keep training?
It doesn't even have the matting layer! What a waste!
Hi there, very interested in the project and potentially in contributing (if my limited knowledge can be of any help).
I noticed that the original paper uses the pascal-fcn8s network as the pretrained model (it can be obtained here: http://www.vlfeat.org/matconvnet/models/beta22/pascal-fcn8s-dag.mat) instead of the VGG19 that you use. They also suggest SGD as the optimizer with a 1e-4 learning rate (while I believe you use Adam).
So my basic question is: could it be that VGG19 has already learned "too much" and is therefore not optimizable any further? Why did you use it instead of FCN-8s?
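For reference, the difference between the paper's suggested optimizer and Adam is just the update rule: plain SGD with lr = 1e-4 applies no momentum and no per-parameter adaptive scaling. A minimal pure-Python sketch of that update on a toy quadratic loss (the loss function and variable names here are illustrative, not from the repo):

```python
def sgd_step(w, grad, lr=1e-4):
    # Plain SGD as suggested in the paper: w_new = w - lr * grad.
    # No momentum or adaptive per-parameter scaling (unlike Adam).
    return [wi - lr * gi for wi, gi in zip(w, grad)]

# Toy loss L(w) = sum(w_i^2), so the gradient is 2 * w.
w = [1.0, -2.0]
for _ in range(3):
    grad = [2.0 * wi for wi in w]
    w = sgd_step(w, grad)
# Each step shrinks every weight by a factor of (1 - 2e-4),
# illustrating how slowly a 1e-4 learning rate moves the weights.
```

With such a small learning rate, convergence in a few thousand iterations would be surprising, which may be relevant to the 6K-iteration issue above.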