yjymickey opened this issue 2 years ago
I read config.py, but I cannot find the fg and bg folders in the P3M-10k.zip dataset:

```python
DATASET_PATHS_DICT = {
    'P3M10K': {
        'TRAIN': {
            'ROOT_PATH': P3M_DATASET_ROOT_PATH + 'train/',
            'ORIGINAL_PATH': P3M_DATASET_ROOT_PATH + 'train/blurred_image/',
            'MASK_PATH': P3M_DATASET_ROOT_PATH + 'train/mask/',
            'FG_PATH': P3M_DATASET_ROOT_PATH + 'train/fg/',
            'BG_PATH': P3M_DATASET_ROOT_PATH + 'train/bg/',
```
Hi there,
We do not provide the foregrounds and backgrounds directly; you need to generate them yourself following the closed-form method from the paper: Levin, Anat, Dani Lischinski, and Yair Weiss, "A Closed-Form Solution to Natural Image Matting," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007. Please refer to the code-base page "Prepare Datasets (2)" for the implementation details. Thanks.
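In case it helps others, below is a minimal sketch of what that preparation step can look like. It assumes the third-party pymatting library and uses its multilevel foreground estimator as a stand-in for the closed-form method; the folder names follow config.py, but this is an illustrative script, not the repository's official "Prepare Datasets (2)" code.

```python
# Illustrative sketch only: generate train/fg/ and train/bg/ from the released
# blurred images and alpha masks. pymatting's estimate_foreground_ml is used
# here in place of the closed-form foreground estimation described in the paper.
import os
import numpy as np
from PIL import Image
from pymatting import estimate_foreground_ml

root = 'P3M-10k/train/'                        # assumed dataset root
img_dir = os.path.join(root, 'blurred_image')
mask_dir = os.path.join(root, 'mask')
fg_dir = os.path.join(root, 'fg')
bg_dir = os.path.join(root, 'bg')
os.makedirs(fg_dir, exist_ok=True)
os.makedirs(bg_dir, exist_ok=True)

for name in sorted(os.listdir(img_dir)):
    stem = os.path.splitext(name)[0]
    image = np.asarray(Image.open(os.path.join(img_dir, name)).convert('RGB'), dtype=np.float64) / 255.0
    # Mask is assumed to be a PNG with the same file stem as the image.
    alpha = np.asarray(Image.open(os.path.join(mask_dir, stem + '.png')).convert('L'), dtype=np.float64) / 255.0

    # Estimate F from (image, alpha); one simple way to get B is to reuse the
    # same estimator with the inverted matte 1 - alpha.
    fg = estimate_foreground_ml(image, alpha)
    bg = estimate_foreground_ml(image, 1.0 - alpha)

    Image.fromarray((np.clip(fg, 0, 1) * 255).astype(np.uint8)).save(os.path.join(fg_dir, stem + '.png'))
    Image.fromarray((np.clip(bg, 0, 1) * 255).astype(np.uint8)).save(os.path.join(bg_dir, stem + '.png'))
```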
Thanks for your reply! But the alpha matting result depends on the compositing equation I = αF + (1 − α)B, and the closed-form foregrounds are not the real ones, only an approximation. So if we use the wrong foreground PNGs, we cannot train an accurate network!
Only the composition loss needs fg and bg; you can comment it out.
Hi @yjymickey,
Since matting is a highly ill-posed problem, the exact foreground cannot be computed. Foregrounds calculated following the closed-form paper are an approximate solution to this problem, and in many previous matting papers they have been shown to perform better than the ones generated by the alpha-blending solution, i.e., F = I * α, B = I * (1 − α).
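As a quick illustration of why the naive blend is problematic: with F = I·α and B = I·(1 − α), recomposing gives α·F + (1 − α)·B = I·(α² + (1 − α)²), which equals I only where α is exactly 0 or 1. A tiny NumPy check, for intuition only and not taken from the repository:

```python
# Intuition check (not repository code): naive alpha blending does not
# reproduce the original image under the compositing equation I = aF + (1-a)B.
import numpy as np

rng = np.random.default_rng(0)
I = rng.random((4, 4, 3))            # a random "image"
a = rng.random((4, 4, 1))            # a random alpha matte in [0, 1]

F_blend = I * a                      # naive foreground, F = I * alpha
B_blend = I * (1 - a)                # naive background, B = I * (1 - alpha)

recomposed = a * F_blend + (1 - a) * B_blend
# Equals I * (a**2 + (1 - a)**2), so the error is clearly non-zero in general.
print(np.abs(recomposed - I).max())
```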
Besides, as @924726976 has pointed out, fg and bg are only used in the composition loss during the training stage. For the advantages of using a composition loss while training, please refer to the paper: Ning Xu, Brian Price, Scott Cohen, and Thomas Huang, "Deep Image Matting," CVPR 2017. Thanks.
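For reference, here is a minimal PyTorch-style sketch of such a composition loss in the spirit of Deep Image Matting; the function name and the ε-smoothed L1 are illustrative choices, not necessarily how this repository implements it:

```python
# Illustrative composition loss in the style of Deep Image Matting (CVPR 2017).
# This is a sketch, not the loss implementation used in this repository.
import torch

def composition_loss(pred_alpha, fg, bg, image, eps=1e-6):
    """L1-style difference between the input image and the image recomposed
    with the predicted alpha and the ground-truth foreground/background.

    pred_alpha: (N, 1, H, W) in [0, 1]
    fg, bg, image: (N, 3, H, W) in [0, 1]
    """
    recomposed = pred_alpha * fg + (1.0 - pred_alpha) * bg   # I = aF + (1 - a)B
    diff = recomposed - image
    # Epsilon-smoothed L1 penalty, as commonly used in matting losses.
    return torch.sqrt(diff * diff + eps * eps).mean()
```

If fg/bg are not generated, this term can simply be omitted from the total training loss, as suggested above, typically at some cost to quality around the matte boundary.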
Thanks for your efforts! But there are no 'train/fg/' and 'train/bg/' folders in the released dataset. Please help! Without these files, we cannot train your model!