haochenheheda / LVVIS

Large-Vocabulary Video Instance Segmentation dataset
GNU General Public License v3.0

Unable to reproduce Swin-B #13

Open yxchng opened 10 months ago

yxchng commented 10 months ago

The following is my config file:

_BASE_: Base-COCO-InstanceSegmentation.yaml
MODEL:
  BACKBONE:
    NAME: "D2SwinTransformer"
  SWIN:
    EMBED_DIM: 128
    DEPTHS: [ 2, 2, 18, 2 ]
    NUM_HEADS: [ 4, 8, 16, 32 ]
    WINDOW_SIZE: 7
  WEIGHTS: "models/swin_base_patch4_window7_224_22k.pkl"
  META_ARCHITECTURE: "OV2Seg"
  SEM_SEG_HEAD:
    NAME: "MaskFormerHead"
    IGNORE_VALUE: 255
    NUM_CLASSES: 1203
    LOSS_WEIGHT: 1.0
    CONVS_DIM: 256
    MASK_DIM: 256
    NORM: "GN"
    # pixel decoder
    PIXEL_DECODER_NAME: "MSDeformAttnPixelDecoder"
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
    DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES: ["res3", "res4", "res5"]
    COMMON_STRIDE: 4
    TRANSFORMER_ENC_LAYERS: 6
  MASK_FORMER:
    TRANSFORMER_DECODER_NAME: "MultiScaleMaskedTransformerDecoder"
    TRANSFORMER_IN_FEATURE: "multi_scale_pixel_decoder"
    CLIP_PATH: "datasets/metadata/fg_bg_5_10_lvis_ens.npy"
    DEEP_SUPERVISION: True
    NO_OBJECT_WEIGHT: 0.1
    OBJECT_WEIGHT: 2.0
    CLASS_WEIGHT: 2.0
    MASK_WEIGHT: 5.0
    DICE_WEIGHT: 5.0
    HIDDEN_DIM: 256
    NUM_OBJECT_QUERIES: 300
    NHEADS: 8
    DROPOUT: 0.0
    DIM_FEEDFORWARD: 2048
    ENC_LAYERS: 0
    PRE_NORM: False
    ENFORCE_INPUT_PROJ: False
    SIZE_DIVISIBILITY: 32
    DEC_LAYERS: 7  # 6 decoder layers, plus one for the loss on the learnable queries
    TRAIN_NUM_POINTS: 12544
    OVERSAMPLE_RATIO: 3.0
    IMPORTANCE_SAMPLE_RATIO: 0.75
    TEST:
      SEMANTIC_ON: False
      INSTANCE_ON: True
      PANOPTIC_ON: False
      OVERLAP_THRESHOLD: 0.8
      OBJECT_MASK_THRESHOLD: 0.8

DATASETS:
  TRAIN: ("lvis_v1_train_norare",)
  TEST: ("lvis_v1_val",)

I train with the following command:

python train_net.py --num-gpus 4 --resume --dist-url tcp://0.0.0.0:12345 \
    --config-file configs/lvvis/instance-segmentation/ov2seg_swinb_bs16_50ep_lvis.yaml \
    SOLVER.IMS_PER_BATCH 8 \
    MODEL.MASK_FORMER.CLIP_CLASSIFIER True \
    MODEL.MASK_FORMER.NUM_OBJECT_QUERIES 100 \
    MODEL.MASK_FORMER.DEC_LAYERS 7 \
    OUTPUT_DIR ./outputs/ov2seg_swinb_image

And I get the following results:

# LVVIS (vs reported 21.1)
Average Precision  AP @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.180

# OVIS (vs reported 17.5)
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | occ=   all | maxDets=100 ] = 0.167

# YTVIS-21 (vs reported 33.9)
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.311

These are much lower than the reported results.

What am I doing wrong? How can I reproduce the results?

haochenheheda commented 10 months ago

Sorry, there is a mistake in the paper. We checked the training log and the batch size was indeed 16. Please try it with bs=16; we will fix the typo in the paper.
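
For concreteness, this amounts to re-running the same command with only the batch size override changed. A minimal sketch, assuming the same config file and flags as in the question (the OUTPUT_DIR suffix is just an illustrative choice):

python train_net.py --num-gpus 4 --resume --dist-url tcp://0.0.0.0:12345 \
    --config-file configs/lvvis/instance-segmentation/ov2seg_swinb_bs16_50ep_lvis.yaml \
    SOLVER.IMS_PER_BATCH 16 \
    MODEL.MASK_FORMER.CLIP_CLASSIFIER True \
    MODEL.MASK_FORMER.NUM_OBJECT_QUERIES 100 \
    MODEL.MASK_FORMER.DEC_LAYERS 7 \
    OUTPUT_DIR ./outputs/ov2seg_swinb_image_bs16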

yxchng commented 10 months ago

@haochenheheda

Do you mean bs16 for both Swin and R50, or bs16 for Swin and bs8 for R50?

haochenheheda commented 10 months ago

bs16 for both
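
In other words, the R50 run should also use SOLVER.IMS_PER_BATCH 16. A sketch of that run, where the config path ov2seg_R50_bs16_50ep_lvis.yaml is a hypothetical name, so substitute the actual R50 config shipped in the repo:

python train_net.py --num-gpus 4 --resume --dist-url tcp://0.0.0.0:12345 \
    --config-file configs/lvvis/instance-segmentation/ov2seg_R50_bs16_50ep_lvis.yaml \
    SOLVER.IMS_PER_BATCH 16 \
    MODEL.MASK_FORMER.CLIP_CLASSIFIER True \
    OUTPUT_DIR ./outputs/ov2seg_r50_image_bs16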