Hi
I have a question regarding the alpha matte needed during the testing phase. As I understand it, the paper states that for testing/inference only the original image and its corresponding trimap are needed, and the encoder/decoder framework will predict the alpha. The ground-truth alpha (generated in Photoshop), on the other hand, is only required for training.

I am aware that the authors originally extracted the alpha mattes manually for all 431 objects and, after compositing, split them between the training and testing sets, so both sets come with high-quality, manually produced alphas. That is fine as long as the authors' own test images are used.

However, suppose I pick another image from the internet. Why is an alpha being passed during the testing phase to generate the trimap, and how am I supposed to obtain that alpha? Can a mask produced by any segmentation framework be used as an initial/rough alpha, fed into generate_trimap, and then run through the network to produce a reasonably accurate alpha?
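For context, here is a minimal sketch of what I mean by turning a rough binary mask into a trimap via dilation/erosion. This is my own illustration, assuming generate_trimap follows the common dilate/erode approach; the function name, kernel size, and iteration count below are assumptions, not necessarily the repository's exact code:

```python
import cv2
import numpy as np

def rough_mask_to_trimap(mask, kernel_size=10, iterations=5):
    """Turn a binary foreground mask (values 0/255) into a trimap (0/128/255)
    by marking the band between the eroded and dilated masks as 'unknown'."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    dilated = cv2.dilate(mask, kernel, iterations=iterations)
    eroded = cv2.erode(mask, kernel, iterations=iterations)
    trimap = np.full(mask.shape, 128, dtype=np.uint8)  # unknown region by default
    trimap[eroded >= 255] = 255   # confident foreground
    trimap[dilated == 0] = 0      # confident background
    return trimap

# Hypothetical usage: 'mask.png' is a binary mask from any segmentation model.
# mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
# trimap = rough_mask_to_trimap(mask)
# cv2.imwrite('trimap.png', trimap)
```

Is feeding the network a trimap produced this way (from a segmentation mask rather than a ground-truth alpha) the intended workflow for arbitrary images?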
Thank you.