Open · alexanderfrey opened this issue 4 years ago
Hey,
I think the approach would be to take only the top X% of negative confidence losses (by value) for each image when calculating Lneg, so that the number of negative examples is at most 3x the number of positive examples per image. I took a shortcut and simply weight the negative term with a fractional factor of 0.3.
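A rough sketch of how that selection could look as a custom loss (assuming a flattened, per-pixel sigmoid confidence map trained with binary cross-entropy; the function name and the min_negatives floor here are only illustrative, not what the repo currently does):

import tensorflow as tf

def hard_negative_bce(y_true, y_pred, neg_pos_ratio=3, min_negatives=64):
    """Binary cross-entropy over the confidence map with hard negative mining.

    Assumes y_true / y_pred are flattened maps of shape (batch, H*W), with
    y_true in {0, 1}. Keeps every positive-pixel loss but only the
    neg_pos_ratio * (number of positives) hardest negative-pixel losses per
    image, with a small floor so images without a ball still contribute.
    """
    bce = tf.keras.backend.binary_crossentropy(y_true, y_pred)  # (batch, H*W)

    pos_mask = tf.cast(tf.equal(y_true, 1.0), bce.dtype)
    neg_mask = 1.0 - pos_mask

    # All positive-pixel losses are kept.
    pos_loss = tf.reduce_sum(bce * pos_mask, axis=-1)

    # How many negatives to keep per image: at most neg_pos_ratio x positives.
    num_pos = tf.reduce_sum(pos_mask, axis=-1)
    num_neg = tf.cast(tf.maximum(neg_pos_ratio * num_pos,
                                 float(min_negatives)), tf.int32)
    num_neg = tf.minimum(num_neg, tf.shape(bce)[-1])

    # Sort negative-pixel losses in descending order and keep the hardest ones.
    sorted_neg = tf.sort(bce * neg_mask, axis=-1, direction='DESCENDING')
    keep = tf.sequence_mask(num_neg, maxlen=tf.shape(bce)[-1], dtype=bce.dtype)
    neg_loss = tf.reduce_sum(sorted_neg * keep, axis=-1)

    return pos_loss + neg_loss

You would then pass it to compile like any other loss, e.g. model.compile(optimizer='adam', loss=hard_negative_bce).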
On Thu, Aug 13, 2020 at 4:50 AM Alexander Frey notifications@github.com wrote:
Hi,
thanks for your repository and the great work. I was looking for an example of the "hard negative mining strategy" they employ in the paper but couldn't find one. It's just not clear to me how to mitigate the class imbalance when all you can do is feed the complete image into the network.
Do you have any suggestions on this?
Thanks, Alexander
Hi, thanks.
I'm considering using per-pixel sample weights and passing sample_weight with shape (samples, sequence_length) to the fit function, where sequence_length is the size of the cmap, 200 * 192. I would assign each background pixel a weight of 1 and each ball pixel a weight of 39600 (!), because a ball pixel is roughly that many times less likely to occur...
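Roughly something like this is what I have in mind (untested sketch; it assumes the target cmap is flattened to (samples, 200 * 192), a Keras version whose compile still accepts sample_weight_mode='temporal' for 2D weights, and model / x_train / y_train as placeholders for the existing training code):

import numpy as np

def make_pixel_weights(cmaps, ball_weight=39600.0, background_weight=1.0):
    """Per-pixel sample weights for flattened ground-truth confidence maps.

    cmaps: array of shape (num_samples, 200 * 192) with 1 at ball pixels and
    0 elsewhere. Weight values are the ones suggested above.
    """
    return np.where(cmaps == 1, ball_weight, background_weight).astype('float32')

# Usage with the existing model / training arrays (placeholders here):
#   model.compile(optimizer='adam', loss='binary_crossentropy',
#                 sample_weight_mode='temporal')  # weights are per pixel, not per sample
#   model.fit(x_train, y_train,
#             sample_weight=make_pixel_weights(y_train),
#             batch_size=8, epochs=10)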
What do you think?
Best
That actually sounds pretty nifty. Is it possible to feed weights in like that out of the box? I thought it had to be simply one scalar value for each individual example/cmap? I would love to know how it turns out.