m-kashani / MS_Project


1- Comparison of Detectron's output with our own labels #3

Open · m-kashani opened this issue 4 years ago

m-kashani commented 4 years ago

First figure out your top 20 images and save them accordingly.
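
What counts as "top" is not specified here; below is a minimal sketch that assumes it means the images with the highest mean detection confidence from an off-the-shelf Detectron2 predictor. The `images/` and `top20/` folder names, the config choice, and the 20-image cutoff are all placeholder assumptions.

```python
import os, shutil
import cv2
import torch
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
predictor = DefaultPredictor(cfg)

# Score every image by its mean detection confidence (assumed definition of "top").
scores = []
for name in os.listdir("images"):                 # placeholder input folder
    img = cv2.imread(os.path.join("images", name))
    if img is None:
        continue
    inst = predictor(img)["instances"]
    mean_score = inst.scores.mean().item() if len(inst) else 0.0
    scores.append((mean_score, name))

# Copy the 20 best-scoring images into a separate folder.
os.makedirs("top20", exist_ok=True)               # placeholder output folder
for _, name in sorted(scores, reverse=True)[:20]:
    shutil.copy(os.path.join("images", name), os.path.join("top20", name))
```
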


To read and understand the pipeline:


extra Amir

m-kashani commented 4 years ago

`layers/` <- custom layers, e.g. deformable conv: https://github.com/facebookresearch/detectron2/tree/master/detectron2/layers
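
For reference, a minimal sketch of how a deformable convolution from `detectron2.layers` is typically driven by a small offset-predicting conv. The channel sizes are arbitrary, and a CUDA GPU is assumed, since the deformable conv kernel in Detectron2 is GPU-only.

```python
import torch
from torch import nn
from detectron2.layers import DeformConv

# Deformable conv samples at learned offsets: 2 values (dx, dy) for each of the
# k*k kernel positions. The offsets are predicted by a regular convolution.
k = 3
offset_conv = nn.Conv2d(64, 2 * k * k, kernel_size=k, padding=1).cuda()
deform_conv = DeformConv(64, 64, kernel_size=k, padding=1).cuda()

x = torch.randn(1, 64, 32, 32, device="cuda")
y = deform_conv(x, offset_conv(x))   # same spatial size as the input
print(y.shape)                       # torch.Size([1, 64, 32, 32])
```
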

m-kashani commented 4 years ago


Meta architecture: GeneralizedRCNN (meta_arch/rcnn.py), which contains:

```
GeneralizedRCNN
├─ Backbone Network: FPN            (backbone/fpn.py)
│  └─ ResNet                        (backbone/resnet.py)
├─ Region Proposal Network: RPN     (proposal_generator/rpn.py)
│  ├─ StandardRPNHead               (proposal_generator/rpn.py)
│  └─ RPNOutputs                    (proposal_generator/rpn_outputs.py)
└─ ROI Heads (Box Head): StandardROIHeads  (roi_heads/roi_heads.py)
   ├─ ROIPooler                     (poolers.py)
   ├─ FastRCNNConvFCHead            (roi_heads/box_head.py)
   ├─ FastRCNNOutputLayers          (roi_heads/fast_rcnn.py)
   └─ FastRCNNOutputs               (roi_heads/fast_rcnn.py)
```
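
This structure can also be verified by building a model from a config and printing the component classes. The config file below is just an example choice, and the device is forced to CPU so the snippet runs anywhere.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.modeling import build_model

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.DEVICE = "cpu"   # build without requiring a GPU
model = build_model(cfg)

print(type(model).__name__)                     # GeneralizedRCNN
print(type(model.backbone).__name__)            # FPN
print(type(model.backbone.bottom_up).__name__)  # ResNet
print(type(model.proposal_generator).__name__)  # RPN
print(type(model.roi_heads).__name__)           # StandardROIHeads
```
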
m-kashani commented 4 years ago

Look at the outputs; see https://detectron2.readthedocs.io/tutorials/models.html#model-output-format for the specification.

```python
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)
```
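
Since the point of this issue is to compare those predictions with our own labels, here is a minimal sketch of such a comparison using pairwise IoU. The ground-truth boxes, the 0.5 score threshold, and the 0.5 IoU threshold below are placeholder assumptions.

```python
import torch
from detectron2.structures import Boxes, pairwise_iou

# Predicted boxes and scores from the model output above.
pred_boxes = outputs["instances"].pred_boxes.to("cpu")
pred_scores = outputs["instances"].scores.to("cpu")

# Placeholder ground-truth boxes from our own labels, in (x1, y1, x2, y2) format.
gt_boxes = Boxes(torch.tensor([[10.0, 20.0, 120.0, 200.0],
                               [300.0, 50.0, 420.0, 180.0]]))

# Keep confident predictions only (assumed threshold).
pred_boxes = pred_boxes[pred_scores > 0.5]

# IoU matrix: rows = ground truth, columns = predictions.
iou = pairwise_iou(gt_boxes, pred_boxes)
matched = (iou.max(dim=1).values > 0.5).sum().item() if len(pred_boxes) else 0
print(f"{matched}/{len(gt_boxes)} ground-truth boxes matched by a prediction")
```
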
