Yes. The global loss is designed to learn the visible keypoints and to provide context to RefineNet for predicting hard keypoints.
Thanks very much. Did you try making the keypoints with valid < 1.1 generate no loss, as the refine loss does, i.e. global loss = (global - target) * (valid > 1.1)? Currently it is global loss = (global - target * (valid > 1.1)). Which one is better? The current solution not only focuses on the visible points, but also pushes the outputs at invisible and non-existing points toward zero.
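For concreteness, here is a small NumPy sketch of the two masking strategies being compared (the names `masked_loss`, `pred`, `target`, `valid` are illustrative, not from the CPN code; the actual implementation uses heatmaps and an L2 loss per keypoint):

```python
import numpy as np

def masked_loss(pred, target, valid):
    """Compare the two masking strategies discussed above.

    Variant A (refine-style): mask the residual, so keypoints with
    valid <= 1.1 contribute nothing to the loss.
    Variant B (current global loss): zero the *target* for those
    keypoints but keep the prediction, so the network is also
    trained to output zero there.
    """
    mask = (valid > 1.1).astype(pred.dtype)
    loss_a = np.mean(((pred - target) * mask) ** 2)   # ignore invalid points
    loss_b = np.mean((pred - target * mask) ** 2)     # push invalid points to 0
    return loss_a, loss_b
```

With `pred = [1.0, 2.0]`, `target = [1.0, 5.0]`, `valid = [2.0, 0.0]`, variant A yields 0 while variant B penalizes the nonzero prediction at the invalid point, which is exactly the difference in behavior described above.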
I don't think this detail is critical. The later OHEM-like loss is more important, and the data with valid label 1 is only a small part of the whole.
Thanks for your response.
I find that the global loss and the refine loss are calculated differently. The refine loss ignores keypoints with valid < 0.1, which generate no loss. But in the global loss, when valid < 1.1 the label is set to 0 as global_label while global_out is left unchanged, which means the global loss only focuses on the visible points. Is my understanding correct?