yuantn / MI-AOD

Code for Multiple Instance Active Learning for Object Detection, CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/papers/Yuan_Multiple_Instance_Active_Learning_for_Object_Detection_CVPR_2021_paper.pdf
Apache License 2.0
329 stars 43 forks

higher performance on random, entropy and coreset methods compared to paper #67

Closed ChenggangLu closed 2 years ago

ChenggangLu commented 2 years ago

(results image attached) I tried to reproduce the random, entropy and coreset methods based on mmdetection. However, the results are much higher than those in the paper. Do you have any idea about this? Thanks.

ChenggangLu commented 2 years ago

I have also tried 3 times with different initial labeled sets, and the results barely change.

yuantn commented 2 years ago

The results of all methods except MI-AOD with SSD are copied from Figure 6 in LL4AL and Figure 4 in CDAL. You can ask their authors for more details.

ChenggangLu commented 2 years ago

Okay. Thank you for your reply.

DietDietDiet commented 2 years ago

@ChenggangLu Would you mind sharing the scripts used to evaluate the entropy and coreset methods? I am also working on similar issues, thanks!

ChenggangLu commented 2 years ago

The dataloader is initialized in the same way as in the author's code. Here is my script:

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

from mmdet.utils import get_root_logger

class SelectionMethod:
    """
    Abstract base class for selection methods,
    which select a subset of indices from the pool set as the next batch to label for batch active learning.
    """
    def __init__(self):
        super().__init__()
        self.logger=get_root_logger()

    def select(self, selection_size):
        """
        Select selection_size elements from the pool set
        (which is assumed to be given in the constructor of the corresponding subclass).
        This method needs to be implemented by subclasses.
        It is assumed that this method is only called once per object, since it can modify the state of the object.

        Args:
            selection_size (int): how many images to select in one cycle

        Returns:
            idxs_selected (np.ndarray): indices of the chosen images
        """
        raise NotImplementedError()

class RandomSelectionMethod(SelectionMethod):
    def __init__(self,idxs_unlabeled):
        super().__init__()
        self.idxs_unlabeled=idxs_unlabeled

    def select(self, selection_size):
        np.random.shuffle(self.idxs_unlabeled)
        return self.idxs_unlabeled[:selection_size]

class EntropySelectionMethod(SelectionMethod):
    def __init__(self,data_loader,idxs_unlabeled,model):
        super().__init__()
        self.data_loader=data_loader
        self.idxs_unlabeled=idxs_unlabeled
        self.model=model

    def select(self, selection_size):
        self.logger.info(f'------ computing image entropy ------')
        entropy=torch.zeros(len(self.idxs_unlabeled)).cuda()
        for i,data in enumerate(self.data_loader):
            with torch.no_grad():
                if i in self.idxs_unlabeled:
                    idx=np.where(i==self.idxs_unlabeled)[0][0]
                    img=data['img'][0].cuda()
                    # flatten the per-level classification maps into [batch, num_anchors, num_classes]
                    feat = self.model.extract_feat(img)
                    cls_scores, _ = self.model.bbox_head(feat)
                    all_cls_scores = torch.cat([
                        s.permute(0, 2, 3, 1).reshape(
                            s.shape[0], -1, self.model.bbox_head.cls_out_channels) for s in cls_scores
                    ], 1)
                    all_cls_scores=F.softmax(all_cls_scores,dim=2)
                    # negative entropy: sum of p*log(p) over classes, averaged over anchors
                    # (the small eps avoids NaN from log(0))
                    entropy[idx]=torch.sum(all_cls_scores*torch.log(all_cls_scores+1e-12),dim=2).mean()
                if i%500==0:
                    self.logger.info(f'------ {i}/{len(self.data_loader.dataset)} ------')
        # `entropy` holds negative entropy, so ascending order puts the most
        # uncertain (highest-entropy) images first
        arg=entropy.argsort().cpu().numpy()
        return self.idxs_unlabeled[arg[:selection_size]]

class CoresetSelectionMethod(SelectionMethod):
    def __init__(self,data_loader,idxs_labeled,idxs_unlabeled,model):
        super().__init__()
        self.data_loader=data_loader
        self.idxs_labeled=idxs_labeled
        self.idxs_unlabeled=idxs_unlabeled
        self.model=model

    def select(self, selection_size):
        self.logger.info(f'------ computing image feat ------')
        feats=compute_feat(self.data_loader,self.model)
        feats_labeled=feats[self.idxs_labeled]
        feats_unlabeled=feats[self.idxs_unlabeled]

        self.logger.info(f'------ computing distances ------')
        dist=torch.cdist(feats_unlabeled,feats_labeled,p=2).min(dim=1)[0]

        self.logger.info(f'------ choose images ------')
        # k-center greedy: repeatedly pick the unlabeled image farthest from the
        # current labeled/selected set, then update the remaining distances
        idxs_chosen=[]
        for i in range(selection_size):
            idx=dist.argmax()
            idxs_chosen.append(self.idxs_unlabeled[idx])
            new_dist=torch.cdist(feats_unlabeled,feats_unlabeled[idx].unsqueeze(0),p=2).squeeze()
            dist=torch.where(new_dist<dist,new_dist,dist)
            if i%20==0:
                self.logger.info(f'------ {i}/{selection_size} ------')
        return np.array(idxs_chosen)

def compute_feat(data_loader,model):
    logger=get_root_logger()
    # one feature vector per image; 1024 matches the channel count of the backbone level used below
    feats=torch.zeros([len(data_loader.dataset),1024]).cuda()
    for i,data in enumerate(data_loader):
        with torch.no_grad():
            # assumes the loader yields one image per batch, in dataset order
            img=data['img'][0].cuda()
            feat = model.extract_feat(img)
            # global average pooling of one feature level -> a 1024-dim image descriptor
            feats[i]=nn.AdaptiveAvgPool2d((1, 1))(feat[1]).view(-1)
            if i%500==0:
                logger.info(f'------ {i}/{len(data_loader.dataset)} ------')
    return feats
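
For context, here is a minimal sketch of how these classes could be driven in each active learning cycle. The names `model`, `pool_loader`, `idxs_labeled` and `cycle_budget` are assumptions about the surrounding training script, not part of the code above:

idxs_all = np.arange(len(pool_loader.dataset))
idxs_unlabeled = np.setdiff1d(idxs_all, idxs_labeled)

# pick one of the strategies above
method = EntropySelectionMethod(pool_loader, idxs_unlabeled, model)
# method = CoresetSelectionMethod(pool_loader, idxs_labeled, idxs_unlabeled, model)
# method = RandomSelectionMethod(idxs_unlabeled)

idxs_new = method.select(cycle_budget)
idxs_labeled = np.concatenate([idxs_labeled, idxs_new])
# ... rebuild the labeled dataloader and retrain the detector ...
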
ChenggangLu commented 2 years ago

> @ChenggangLu Would you mind sharing the scripts used to evaluate the entropy and coreset methods? I am also working on similar issues, thanks!

I use the same settings as in Learning Loss (LL4AL).

ChenggangLu commented 2 years ago

I found out the reason. The data augmentation used in mmdetection is a little different from the SSD in LL4AL. It seems that the SSD in LL4AL is missing the augmentation that samples a patch whose minimum Jaccard overlap with the objects is 0.5. When I changed the settings in mmdetection to match the SSD in LL4AL, the results were almost the same.
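
For reference, in mmdetection this patch sampling lives in the SSD train pipeline as the MinIoURandomCrop transform. Below is a sketch of a typical SSD300 pipeline; the exact values depend on the mmdetection version and config, and the thread does not say which values were changed:

train_pipeline = [
    dict(type='LoadImageFromFile', to_float32=True),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='PhotoMetricDistortion'),
    dict(type='Expand', mean=[123.675, 116.28, 103.53], to_rgb=True, ratio_range=(1, 4)),
    # samples a patch whose minimum Jaccard overlap with the objects is drawn from min_ious
    dict(type='MinIoURandomCrop', min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), min_crop_size=0.3),
    dict(type='Resize', img_scale=(300, 300), keep_ratio=False),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]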

yuantn commented 2 years ago

OK, thanks for looking into it.

jiahao12121 commented 2 years ago

Hi, did you rerun SSD with Pascal VOC? How were your results? My results are lower than the paper's, but my log file is the same as the author's.

ChenggangLu commented 2 years ago

> Hi, did you rerun SSD with Pascal VOC? How were your results? My results are lower than the paper's, but my log file is the same as the author's.

Do you mean using MI-AOD to select samples for training SSD on PASCAL VOC? I haven't tried that yet.

jiahao12121 commented 2 years ago

> Hi, did you rerun SSD with Pascal VOC? How were your results? My results are lower than the paper's, but my log file is the same as the author's.
>
> Do you mean using MI-AOD to select samples for training SSD on PASCAL VOC? I haven't tried that yet.

Yes, I trained SSD on VOC, but the mAP is lower than in the paper.

ChenggangLu commented 2 years ago

@jiahao12121 OK. I will try MI-AOD and reply to you once I get the results.

jiahao12121 commented 2 years ago

> @jiahao12121 OK. I will try MI-AOD and reply to you once I get the results.

Thank you! I have been studying the code recently and have some questions; may I discuss them with you? Would you mind leaving an email address?

ChenggangLu commented 2 years ago

> @jiahao12121 OK. I will try MI-AOD and reply to you once I get the results.
>
> Thank you! I have been studying the code recently and have some questions; may I discuss them with you? Would you mind leaving an email address?

My email address is luchenggang@zju.edu.cn

DietDietDiet commented 2 years ago

@ChenggangLu Thanks for sharing the code and sorry for the late reply. For your original question, personally I think it might be due to the use of pretrained models.