Closed · BanaBanaBanaa closed this issue 2 years ago
Hi,
Please excuse me for the confusion. The slic_loss in the code is actually the semantic loss in the paper, while the color loss in the code corresponds to the slic loss mentioned in the paper. I did not notice this when I cleaned up the code.
I am not sure if it is normal or not. It could vary depending on the data (e.g., resolution, etc.), because we are using a sum-reduced loss here, so the magnitude grows with the number of pixels. The key is whether you can observe reasonable superpixel segmentation.
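To illustrate the point above, here is a minimal sketch (not the repo's actual code) of why a sum-reduced loss depends on resolution while a mean-reduced one does not; `per_pixel_loss` is a hypothetical stand-in for the per-pixel semantic loss term:

```python
# Hypothetical per-pixel loss map: pretend every pixel contributes 0.5.
def per_pixel_loss(h, w):
    return [[0.5 for _ in range(w)] for _ in range(h)]

def reduce_loss(grid, reduction="sum"):
    """Reduce a 2-D loss map to a scalar by sum or mean."""
    flat = [v for row in grid for v in row]
    total = sum(flat)
    return total if reduction == "sum" else total / len(flat)

# Sum reduction scales with pixel count, so a higher-resolution
# custom dataset naturally reports a much larger loss value.
small = reduce_loss(per_pixel_loss(100, 100), "sum")   # 5000.0
large = reduce_loss(per_pixel_loss(200, 200), "sum")   # 20000.0
mean  = reduce_loss(per_pixel_loss(200, 200), "mean")  # 0.5
```

So a loss in the tens of thousands is not by itself a red flag under sum reduction; the trend and the resulting segmentation matter more than the absolute number.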
The rgb2lab_torch function should be universal. But I believe we actually did not use this function in this repo.
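For reference, RGB-to-Lab conversion depends only on the sRGB standard and a chosen white point, not on any dataset, which is why such a routine is universal. A plain-Python sketch of the standard sRGB → CIELAB conversion (D65 white point; this is the textbook formula, not necessarily line-for-line what rgb2lab_torch does):

```python
def srgb_to_linear(c):
    # Undo sRGB gamma; input in [0, 1].
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb2lab(r, g, b):
    r, g, b = (srgb_to_linear(v) for v in (r, g, b))
    # Linear RGB -> XYZ (standard sRGB matrix).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # Normalize by the D65 white point.
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b_ = rgb2lab(1.0, 1.0, 1.0)  # white: L near 100, a and b near 0
```

Nothing here is tied to BSD or any other dataset; the same conversion applies to any RGB image.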
Thank you! I can get reasonable superpixel segmentation. Here is one example of the superpixel segmentation:
Well, I am not sure about your application, but this segmentation does not look right to me if this is the final converged version. The segmentation boundaries are supposed to align with the object/color boundaries, depending on which loss you are using. You may want to double-check your implementation.
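One quick, hedged way to quantify "boundaries aligned with color boundaries" is boundary recall: the fraction of true edge pixels that have a superpixel boundary within a small tolerance. The sketch below uses toy stand-ins for the label map and edge map, not the repo's data structures:

```python
def boundary_mask(labels):
    """Mark pixels whose right or bottom neighbor has a different label."""
    h, w = len(labels), len(labels[0])
    mask = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if (i + 1 < h and labels[i][j] != labels[i + 1][j]) or \
               (j + 1 < w and labels[i][j] != labels[i][j + 1]):
                mask[i][j] = True
    return mask

def boundary_recall(labels, edges, tol=1):
    """Fraction of edge pixels with a predicted boundary within tol pixels."""
    pred = boundary_mask(labels)
    h, w = len(edges), len(edges[0])
    hit = total = 0
    for i in range(h):
        for j in range(w):
            if not edges[i][j]:
                continue
            total += 1
            hit += any(pred[ii][jj]
                       for ii in range(max(0, i - tol), min(h, i + tol + 1))
                       for jj in range(max(0, j - tol), min(w, j + tol + 1)))
    return hit / total if total else 1.0
```

A converged model should score high on this against a color-edge map of the input; a low score is a sign the loss is not shaping the superpixels as intended.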
I only replaced the dataset with the custom one. I will check my implementation.
Good luck.
Hi! In the experiments, did you only use the semantic loss and not the slic loss? Is the 'color loss' you mentioned in other issues the slic loss from the paper? When I use a custom dataset, the loss_sem is very large (e.g., 17812.12); is that correct?
Second, is rgb2lab_torch universal, or is it for the BSD dataset only?
Thank you very much!