Open JoyHuYY1412 opened 5 years ago
Yes, you can think of it that way. The mean-shift grouping behaves more like a loss than a parametric module. But there are two things to notice --
Hope this helps. Thanks for your interest.
On Sun, Dec 2, 2018 at 1:46 AM JoyHuYY1412 notifications@github.com wrote:
Thank you for your work. I am not very familiar with MATLAB coding, but according to your paper, I think the loss you use for instance segmentation is the margin loss after every step of mean-shift from 0 to T, and at each time step t you use the updated data to compute the loss l_t. Also, since the mean-shift kernel is fixed (according to your paper), all you train is the embedding from pixels to the 64-dimensional features. Am I right?
Yes, thank you for your help. I meant that the kernel's form is fixed since you choose a fixed bandwidth, while the data is updated at each mean-shift step.
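For other readers landing here, the idea discussed above can be sketched as follows. This is a minimal NumPy illustration, not the authors' MATLAB code: it runs T mean-shift steps with a fixed-bandwidth kernel, updating the data each step, so the only trainable part would be the network producing the embeddings. I use a Gaussian kernel here for simplicity; the paper itself works with normalized embeddings and a von Mises-Fisher kernel, and a margin loss l_t would be computed on each step's output.

```python
import numpy as np

def mean_shift_steps(X, bandwidth=0.5, T=10):
    """Run T mean-shift steps on pixel embeddings X of shape (n_pixels, dim).

    The kernel has no learnable parameters (fixed bandwidth); in training,
    only the network that produces X would be learned. Returns the list of
    intermediate embeddings X_0 ... X_T, one per step, so a loss l_t can be
    attached to every step as described in the thread.
    """
    trajectory = [X]
    for _ in range(T):
        # Pairwise squared distances between current embeddings.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        # Fixed-bandwidth Gaussian kernel (hypothetical stand-in for
        # the paper's von Mises-Fisher kernel on the unit sphere).
        K = np.exp(-d2 / (2.0 * bandwidth ** 2))
        # Weighted-mean update: each point moves toward its kernel mean.
        X = (K @ X) / K.sum(axis=1, keepdims=True)
        trajectory.append(X)  # a margin loss l_t would be computed here
    return trajectory
```

With well-separated clusters and a small bandwidth, each cluster's points collapse toward their mode over the T steps, which is what makes a per-step margin loss progressively easier to satisfy.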