fuy34 / superpixel_fcn

[CVPR‘20] SpixelFCN: Superpixel Segmentation with Fully Convolutional Network

About The loss #36

Closed BanaBanaBanaa closed 2 years ago

BanaBanaBanaa commented 2 years ago

Hi! In the experiments, did you use only the semantic loss and not the SLIC loss? Is the "color loss" you mentioned in other issues the SLIC loss from the paper? Also, when I train on a custom dataset, loss_sem is very large (e.g., 17812.12). Is that correct?

Second, is rgb2lab_torch universal, or is it specific to the BSDS dataset?

Thank you very much!

fuy34 commented 2 years ago

Hi, please excuse the confusion. The slic_loss in the code is actually the semantic loss in the paper, while the color loss corresponds to the SLIC loss mentioned in the paper. I did not notice this when I cleaned up the code.

I am not sure whether that value is normal. It can vary with the data (e.g., resolution), because we use a sum-reduced loss here. The key question is whether you observe reasonable superpixel segmentation.
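To see why a sum-reduced loss can look alarmingly large on a custom dataset, here is a minimal hedged sketch (not the repo's actual loss code): the same per-pixel error produces a loss that scales with the number of pixels when `reduction="sum"` is used, while a mean reduction stays resolution-independent.

```python
import torch
import torch.nn.functional as F

# Hypothetical illustration: identical per-pixel error (0.1) on a small
# and a large image. The sum-reduced loss grows with image resolution.
pred_small = torch.zeros(1, 1, 64, 64)
pred_large = torch.zeros(1, 1, 512, 512)
target_small = torch.full_like(pred_small, 0.1)
target_large = torch.full_like(pred_large, 0.1)

loss_sum_small = F.mse_loss(pred_small, target_small, reduction="sum")
loss_sum_large = F.mse_loss(pred_large, target_large, reduction="sum")
loss_mean_large = F.mse_loss(pred_large, target_large, reduction="mean")

print(loss_sum_small.item())   # scales with 64*64 pixels
print(loss_sum_large.item())   # ~64x larger, same per-pixel error
print(loss_mean_large.item())  # resolution-independent
```

So a loss in the tens of thousands can simply reflect a higher-resolution input, which is why the absolute number matters less than the quality of the resulting superpixels.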

rgb2lab_torch should be universal, but I believe we do not actually use that function in this repo.
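For reference, a minimal NumPy sketch of why an RGB-to-Lab conversion is dataset-independent (this is the standard sRGB/D65 formula, not the repo's rgb2lab_torch): the mapping is fixed by the color-space standard, so nothing in it is specific to BSDS.

```python
import numpy as np

def rgb2lab(rgb):
    """Standard sRGB (values in [0, 1]) -> CIELAB, shape (..., 3)."""
    # 1. Undo the sRGB gamma to get linear RGB.
    linear = np.where(rgb <= 0.04045, rgb / 12.92,
                      ((rgb + 0.055) / 1.055) ** 2.4)
    # 2. Linear RGB -> XYZ under the D65 white point.
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = linear @ M.T
    # 3. Normalize by the D65 reference white, apply the Lab nonlinearity.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    delta = 6.0 / 29.0
    f = np.where(xyz > delta ** 3, np.cbrt(xyz),
                 xyz / (3 * delta ** 2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

white = rgb2lab(np.array([[1.0, 1.0, 1.0]]))
print(white)  # approximately [100, 0, 0]
```

Every constant here comes from the sRGB/CIELAB standards, so the same conversion applies to any dataset.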

BanaBanaBanaa commented 2 years ago

Thank you! I can get reasonable superpixel segmentation. Here is one example of the superpixel segmentation: image

fuy34 commented 2 years ago

Well, I am not sure about your application.

But this segmentation does not look right to me, if this is the final converged result. The segmentation boundaries are supposed to align with the object/color boundaries, depending on which loss you are using. You may want to double-check your implementation.

BanaBanaBanaa commented 2 years ago

I only replaced the dataset with a custom one. I will check my implementation.

fuy34 commented 2 years ago

Good luck.