krumo / Domain-Adaptive-Faster-RCNN-PyTorch

Domain Adaptive Faster R-CNN in PyTorch
MIT License

What is the difference between proposals and da_proposals in the forward method of box_head.py? #40

Open just-eoghan opened 2 years ago

just-eoghan commented 2 years ago

Hello,

I was wondering what the difference is between proposals and da_proposals in the forward method of box_head.py.

When debugging the codebase, they appear to be identical:

[debugger screenshot]

Is there any difference between the two?

In the box_head.py forward method, proposals are passed through the feature extractor and then the predictor to generate the class_logits and box_regression needed to get loss_classifier and loss_box_reg from the loss_evaluator(). Then da_proposals are used in exactly the same way, running the loss_evaluator a second time just to obtain da_ins_labels.

        if self.training:
            # Faster R-CNN subsamples during training the proposals with a fixed
            # positive / negative ratio
            with torch.no_grad():
                proposals = self.loss_evaluator.subsample(proposals, targets)

        # extract features that will be fed to the final classifier. The
        # feature_extractor generally corresponds to the pooler + heads
        x = self.feature_extractor(features, proposals)
        # final classifier that converts the features into predictions
        class_logits, box_regression = self.predictor(x)

        if not self.training:
            result = self.post_processor((class_logits, box_regression), proposals)
            return x, result, {}, x, None

        loss_classifier, loss_box_reg, _ = self.loss_evaluator(
            [class_logits], [box_regression]
        )

        if self.training:
            with torch.no_grad():
                da_proposals = self.loss_evaluator.subsample_for_da(proposals, targets)

        da_ins_feas = self.feature_extractor(features, da_proposals)
        class_logits, box_regression = self.predictor(da_ins_feas)
        lc2, lbr2, da_ins_labels = self.loss_evaluator(
            [class_logits], [box_regression]
        )

Unless I'm missing something, could you not call the loss evaluator once and return the da_ins_labels like this, since they are identical anyway?

        loss_classifier, loss_box_reg, da_ins_labels = self.loss_evaluator(
            [class_logits], [box_regression]
        )
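One thing worth checking before making that simplification: if subsample and subsample_for_da each draw their own random subset, the two proposal sets are only identical when the sampling happens to agree (or is deliberately synchronized). A tiny self-contained demonstration that two independent random subsamples of the same pool generally differ (the pool size and sample size are arbitrary here):

```python
import torch

torch.manual_seed(0)
pool = torch.arange(1000)

# Two independent draws of 512 indices from the same 1000-proposal pool.
first = pool[torch.randperm(1000)[:512]]
second = pool[torch.randperm(1000)[:512]]

# Compare as sets: independent draws almost never select the same subset,
# so reusing labels from the first call is only safe if both subsampling
# paths are guaranteed to pick the same indices.
same = torch.equal(first.sort().values, second.sort().values)
print(same)  # False
```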

I feel I could be missing something here, it would be great if you could advise. Thanks! :smile: