Closed sunshineatnoon closed 3 years ago
Hi,
Thank you for your interest in our work. m is set to 0.003 for segmentation (with the semantic loss) and to 30/60 for stereo matching (with the color loss), and S is the cell size of your choice.
More details can be found in the "Implementation Details" and "Application to Stereo Matching" sections of the paper, and you can modify them here in the code:
https://github.com/fuy34/superpixel_fcn/blob/0f07be910e41ac73404de23caa99f0e454c6440c/main.py#L68
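For reference, here is a minimal numpy sketch of how m and S might enter a SLIC-style loss of this form: a feature reconstruction term plus a positional compactness term scaled by m/S. The function name, the soft association matrix Q, and the mean reduction are illustrative assumptions, not the repo's actual implementation.

```python
import numpy as np

def slic_loss(feats, pos, Q, m=0.003, S=16):
    """Sketch of a SLIC-style loss with an m/S compactness weight.

    feats: (N, C) per-pixel features (e.g. CIELab color)
    pos:   (N, 2) per-pixel (x, y) coordinates
    Q:     (N, K) soft pixel-to-superpixel association (rows sum to 1)
    m:     compactness weight (0.003 for segmentation per the reply above)
    S:     superpixel cell size
    """
    # Superpixel centers as association-weighted means of features/positions.
    w = Q / (Q.sum(axis=0, keepdims=True) + 1e-8)  # column-normalized (N, K)
    c_feat = w.T @ feats                           # (K, C) feature centers
    c_pos = w.T @ pos                              # (K, 2) position centers
    # Reconstruct each pixel from the centers it is associated with.
    rec_feat = Q @ c_feat                          # (N, C)
    rec_pos = Q @ c_pos                            # (N, 2)
    # Feature term plus positional term, the latter scaled by m/S.
    loss_feat = np.linalg.norm(feats - rec_feat, axis=1).mean()
    loss_pos = np.linalg.norm(pos - rec_pos, axis=1).mean()
    return loss_feat + (m / S) * loss_pos
```

With a hard one-hot Q and identical features inside each superpixel, both terms vanish, which is a quick sanity check for an implementation.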
Thanks for the quick reply! Is it possible to share the code for the SLIC loss?
I tried to implement this loss, but observed many small superpixels during training. I used CIELab image features and did not use the semantic masks or depth during training. The visualization can be found here.
I see. The BSDS500 dataset is a little noisy in the color domain; you may need to apply bilateral filtering (or any other filtering method you like) before feeding the images to the network. For the stereo matching task the images are cleaner, so this step is not needed. Good luck!
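A bilateral filter smooths color noise while keeping edges, since it down-weights neighbors that differ in intensity. Below is a minimal single-channel numpy sketch of the idea; the radius and sigma values are illustrative, and in practice a library routine such as OpenCV's `cv2.bilateralFilter` would be the usual choice.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Minimal single-channel bilateral filter (parameters illustrative).

    img: 2D float array, values in [0, 1].
    """
    H, W = img.shape
    out = np.zeros_like(img)
    # Spatial Gaussian over the window offsets (fixed per image).
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode="edge")
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: down-weight neighbors with dissimilar intensity,
            # which is what preserves edges.
            rangek = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            w = spatial * rangek
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

A constant image passes through unchanged, and a sharp step edge stays sharp because pixels across the edge receive near-zero range weight.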
Got it. Thanks so much!
Hi, thanks for open-sourcing this awesome work. I noticed that this repo covers the segmentation and depth estimation tasks. Could you also provide the code or hyper-parameters for Eq. (4) in the paper? Thanks!