anonymousiccv opened this issue 3 years ago
For different MIL learners, the localizers output their own instance localization scores. For localization completeness, we add a discrepancy loss among these instance localization scores. But this may hurt the instance localization ability of some localizers, and in turn deteriorate the detection performance and the image classification ability of the MIL learners. So we add

`cls_loss_0 = cross_entropy_losses(im_cls_prob_0, labels.type(im_cls_prob_0.dtype))`

However, we find that `cls_loss_0` and `cls_loss_1` could enforce the similarity of the localization scores of different MIL learners, which decreases localization completeness and increases localization redundancy. So the CE losses are multiplied by 0.00001 (0.0001 was also validated in our experiments and shows comparable performance to 0.00001). Then

`cls_loss_0 = 0.00001 * cross_entropy_losses(im_cls_prob_0, labels.type(im_cls_prob_0.dtype))`
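To make the trade-off concrete, here is a minimal numpy sketch of the two terms discussed above. The function names `multilabel_ce_loss` and `discrepancy_loss` are hypothetical (the repo's actual `cross_entropy_losses` is a PyTorch op), and the discrepancy term here is only an illustrative stand-in for whatever form D-MIL actually uses:

```python
import numpy as np

def multilabel_ce_loss(im_cls_prob, labels, weight=1e-5):
    """Down-weighted multi-label cross-entropy on one MIL learner's
    image-level class probabilities. The tiny weight (1e-5 per the
    author; 1e-4 performed comparably) keeps this term from forcing
    the learners' localization scores to become similar."""
    eps = 1e-6
    p = np.clip(im_cls_prob, eps, 1.0 - eps)
    ce = -(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))
    return weight * ce.sum()

def discrepancy_loss(scores_0, scores_1):
    """Illustrative discrepancy term between two learners' instance
    localization scores: the negative mean absolute difference, so
    minimizing it pushes the two score maps apart."""
    return -np.abs(scores_0 - scores_1).mean()
```

With the weight at 1e-5 the CE gradient is small relative to the discrepancy gradient, so the learners keep diverse localizations while still being anchored to the image-level labels.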
Thanks very much! One more question: is the implementation of PCL the same as the official one? How could we test the standard PCL via this repo?
Our implementation of PCL is almost the same as the official one, except that our code is more concise and less complicated. Besides, the method of getting the cluster centers is a little different. There are two ways of getting the cluster centers, as mentioned in the PCL paper.
In our implementation of PCL, we just choose the first method: selecting the highest-scoring proposals as the proposal cluster centers.
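As a rough sketch of that first method, the following numpy snippet picks, for each class present in the image, the proposal with the highest predicted score as its cluster center. This is an assumption-laden simplification of the repo's actual selector (the real code also carries scores and handles batching):

```python
import numpy as np

def get_highest_score_proposals(proposal_scores, image_labels):
    """Select proposal cluster centers (first method from the PCL paper).

    proposal_scores: (num_proposals, num_classes) array of per-proposal
                     class scores from a MIL learner.
    image_labels:    (num_classes,) binary image-level labels.
    Returns a dict mapping each present class to the index of its
    highest-scoring proposal."""
    centers = {}
    for c in np.flatnonzero(image_labels):
        centers[int(c)] = int(np.argmax(proposal_scores[:, c]))
    return centers
```

The second method in the PCL paper instead builds graphs over high-scoring proposals; swapping that in would mean replacing this argmax with the graph-based selection.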
For testing the standard PCL via this repo, we need to modify the code that gets the proposal cluster centers. For example, in D-MIL the relevant code is `instane_selector` or `get_highest_score_proposals`. We can replace it with code borrowed from the official PCL repo.
I will upload another branch for testing the standard PCL method soon.
Hi Wei, can you tell me why the CE losses are multiplied by 0.00001?

`cls_loss_0 = 0.00001 * cross_entropy_losses(im_cls_prob_0, labels.type(im_cls_prob_0.dtype))`