Jade999 opened this issue 5 years ago
If I understand correctly, you use the KNN-matting algorithm to generate the ground-truth alpha mattes from a segmentation dataset. Specifically, you first use gen_trimap.py to generate trimaps by eroding and dilating the segmentation masks, then feed the generated trimaps to KNN-matting to produce the ground-truth alpha mattes, and finally use the images and their corresponding generated alpha mattes as the training data for all networks.
Is this the pipeline you used to build your 60k dataset? Looking forward to your reply, thank you very much.
@Jade999 Yes, but it takes a very, very long time to get the alpha mattes... so I only generated 6K+ images that way, and used the other mask GTs directly as alpha GTs for the rest.
Are you using the alpha maps generated by the knn-matting algorithm as your ground-truth alpha maps? Or do you have some other dataset of images and alpha maps with higher accuracy? I noticed aisegment shared their dataset; however, its alpha maps are still not accurate enough. In the paper, the authors mention they constructed a large dataset; did they mention how to access it?
@Capchenxi Alpha images are, however, available here: https://github.com/aisegmentcn/matting_human_datasets But I found that this code needs trimaps for training, so how can I get the trimaps?
The trimaps should be generated by an algorithm. @Jason-xin
@Jason-xin you can generate the trimaps using ./data/gen_trimap.sh
or simply tweak gen_trimap.py
I found that using
dilated = cv2.dilate(msk, kernel, iterations=3) * 255
eroded = cv2.erode(msk, kernel, iterations=1) * 255
instead of
dilated = cv2.dilate(msk, kernel, iterations=1) * 255
eroded = cv2.erode(msk, kernel, iterations=1) * 255
was giving better results. Basically, you erode once (shrinking the mask) and dilate three times (expanding it), which yields a trimap with a slightly wider grey (pixel value = 128) unknown region.
In the paper https://arxiv.org/pdf/1703.03872.pdf, Figure 7 shows the SAD error as a function of trimap dilation.
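The erode/dilate trimap construction above can be sketched without OpenCV. This is a minimal illustration, assuming the mask is a binary array; the pure-NumPy dilate/erode helpers below stand in for cv2.dilate/cv2.erode with a 3x3 kernel, and make_trimap is a hypothetical name for the logic in gen_trimap.py, not the script's actual function:

```python
import numpy as np

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 square kernel (stand-in for cv2.dilate)."""
    m = mask.astype(bool)
    h, w = m.shape
    for _ in range(iterations):
        p = np.pad(m, 1, constant_values=False)
        out = np.zeros_like(m)
        for dy in (0, 1, 2):            # OR over the 3x3 neighborhood
            for dx in (0, 1, 2):
                out |= p[dy:dy + h, dx:dx + w]
        m = out
    return m

def erode(mask, iterations=1):
    """Binary erosion with a 3x3 square kernel (stand-in for cv2.erode)."""
    m = mask.astype(bool)
    h, w = m.shape
    for _ in range(iterations):
        p = np.pad(m, 1, constant_values=True)  # don't erode at the image border
        out = np.ones_like(m)
        for dy in (0, 1, 2):            # AND over the 3x3 neighborhood
            for dx in (0, 1, 2):
                out &= p[dy:dy + h, dx:dx + w]
        m = out
    return m

def make_trimap(msk, dilate_iters=3, erode_iters=1):
    """0 = background, 255 = foreground, 128 = unknown band in between."""
    d = dilate(msk, dilate_iters)   # expanded mask: anything outside is background
    e = erode(msk, erode_iters)     # shrunken mask: anything inside is foreground
    trimap = np.full(msk.shape, 128, dtype=np.uint8)
    trimap[~d] = 0
    trimap[e] = 255
    return trimap

# toy binary mask: a 3x3 foreground square in an 11x11 image
mask = np.zeros((11, 11), dtype=np.uint8)
mask[4:7, 4:7] = 1
trimap = make_trimap(mask)
```

The wider the grey band (more dilation iterations), the more pixels the matting algorithm has to resolve, which is the trade-off Figure 7 of the paper measures.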
Hi, I have a question about how the mask image is obtained.
@lizhengwei1992 I tried to use a mask image for testing, but when I run gen_trimap.py I get the error below:
Traceback (most recent call last):
  File "data/gen_trimap.py", line 74, in <module>
    main()
  File "data/gen_trimap.py", line 68, in main
    trimap = erode_dilate(msk, size=(args.size, args.size))
  File "data/gen_trimap.py", line 36, in erode_dilate
    assert(cnt1 == cnt2 + cnt3)
AssertionError
What is the reason? Thank you!
The test mask is below:
You should make your mask image contain only the pixel values 0 and 255: msk[msk > 0] = 255
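A minimal illustration of that fix, assuming the mask was loaded as a greyscale uint8 array. Anti-aliased edges or JPEG compression can leave intermediate grey values in the mask, which is likely what trips the cnt1 == cnt2 + cnt3 assertion in erode_dilate:

```python
import numpy as np

# a mask with stray intermediate grey values (e.g. from JPEG artifacts)
msk = np.array([[  0,  37, 255],
                [128, 200,   0]], dtype=np.uint8)

# binarize: treat every nonzero pixel as foreground
msk[msk > 0] = 255
```

After this step the mask holds only 0 and 255, so the trimap's pixel counts for background, foreground, and unknown add up as the assertion expects.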
Hi, can you describe the method you used to get your 60k training images from https://github.com/lizhengwei1992/Fast_Portrait_Segmentation/tree/master/dataset#dataset? Thank you very much for your work.