deeplearning-wisc / vos

Source code for the ICLR'22 paper "VOS: Learning What You Don’t Know by Virtual Outlier Synthesis"
Apache License 2.0

TypeError: forward() missing 1 required positional argument: 'iteration' #2

Closed · Domenicotech closed 2 years ago

Domenicotech commented 2 years ago

Hi, congratulations on the work; I find the idea very interesting.

I'm trying to reproduce the code on Colab, running training with the VOC dataset.

Note that I had to modify line 19 in detection/core/dataset/setup_datasets.py so that it points to my actual VOC dataset path:

```python
setup_voc_dataset(dataset_dir + 'VOC_0712_converted')
```
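For other Colab users, here is a quick, hypothetical sanity check of the expected layout before launching training. The directory name matches the call above and the JSON filename comes from the training log further down; everything else is my own illustration, not code from the repo:

```python
import os

# Minimal sketch, assuming the converted-VOC layout used above:
# /content/VOC_0712_converted/ containing voc0712_train_all.json
# (that filename appears in the training log below).
dataset_dir = '/content/'
voc_root = os.path.join(dataset_dir, 'VOC_0712_converted')
ann_file = os.path.join(voc_root, 'voc0712_train_all.json')

assert os.path.isdir(voc_root), f"missing dataset dir: {voc_root}"
assert os.path.isfile(ann_file), f"missing annotations: {ann_file}"
print("VOC_0712_converted layout looks OK")
```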

When I run

```
!python train_net.py --dataset-dir /content/ --num-gpus 1 --config-file VOC-Detection/faster-rcnn/vos.yaml --random-seed 0 --resume
```

It returns:

```
Command Line Args: Namespace(config_file='VOC-Detection/faster-rcnn/vos.yaml', dataset_dir='/content/', dist_url='tcp://127.0.0.1:49152', eval_only=False, image_corruption_level=0, inference_config='', iou_correct=0.5, iou_min=0.1, machine_rank=0, min_allowed_score=0.0, num_gpus=1, num_machines=1, opts=[], random_seed=0, resume=True, savefigdir='./savefig', test_dataset='', visualize=0)
[02/11 09:18:10 detectron2]: Rank of current process: 0. World size: 1
[02/11 09:18:10 detectron2]: Environment info:


sys.platform             linux
Python                   3.7.12 (default, Jan 15 2022, 18:48:18) [GCC 7.5.0]
numpy                    1.19.5
detectron2               0.6 @/usr/local/lib/python3.7/dist-packages/detectron2
Compiler                 GCC 7.5
CUDA compiler            CUDA 11.1
detectron2 arch flags    6.0
DETECTRON2_ENV_MODULE    <not set>
PyTorch                  1.10.0+cu111 @/usr/local/lib/python3.7/dist-packages/torch
PyTorch debug build      False
GPU available            Yes
GPU 0                    Tesla P100-PCIE-16GB (arch=6.0)
Driver version           460.32.03
CUDA_HOME                /usr/local/cuda
Pillow                   7.1.2
torchvision              0.11.1+cu111 @/usr/local/lib/python3.7/dist-packages/torchvision
torchvision arch flags   3.5, 5.0, 6.0, 7.0, 7.5, 8.0, 8.6
fvcore                   0.1.5.post20220119
iopath                   0.1.9
cv2                      4.1.2


PyTorch built with:

[02/11 09:18:10 detectron2]: Command line arguments: Namespace(config_file='/content/vos/detection/configs/VOC-Detection/faster-rcnn/vos.yaml', dataset_dir='/content/', dist_url='tcp://127.0.0.1:49152', eval_only=False, image_corruption_level=0, inference_config='', iou_correct=0.5, iou_min=0.1, machine_rank=0, min_allowed_score=0.0, num_gpus=1, num_machines=1, opts=[], random_seed=0, resume=True, savefigdir='./savefig', test_dataset='', visualize=0)
[02/11 09:18:10 detectron2]: Contents of args.config_file=/content/vos/detection/configs/VOC-Detection/faster-rcnn/vos.yaml:
_BASE_: "../../Base-RCNN-FPN.yaml"
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNNLogisticGMM"
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
# WEIGHTS: "./data/VOC-Detection/faster-rcnn/faster_rcnn_R_50_FPN_all_logistic/random_seed_0/model_final.pth"
  PROPOSAL_GENERATOR:
    NAME: "RPNLogistic"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
  ROI_HEADS:
    NAME: "ROIHeadsLogisticGMMNew"
    NUM_CLASSES: 20
INPUT:
  MIN_SIZE_TRAIN: (480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800)
  MIN_SIZE_TEST: 800
DATASETS:
  TRAIN: ('voc_custom_train',)
  TEST: ('voc_custom_val',)
SOLVER:
  IMS_PER_BATCH: 8
  BASE_LR: 0.02
  STEPS: (12000, 16000)
  MAX_ITER: 18000  # 17.4 epochs
  WARMUP_ITERS: 100
VOS:
  STARTING_ITER: 12000
  SAMPLE_NUMBER: 1000
DATALOADER:
  NUM_WORKERS: 2  # Depends on the available memory

[02/11 09:18:10 detectron2]: Running with full config:
CUDNN_BENCHMARK: false
DATALOADER:
  ASPECT_RATIO_GROUPING: true
  FILTER_EMPTY_ANNOTATIONS: true
  NUM_WORKERS: 2
  REPEAT_THRESHOLD: 0.0
  SAMPLER_TRAIN: TrainingSampler
DATASETS:
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
  PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000
  PROPOSAL_FILES_TEST: []
  PROPOSAL_FILES_TRAIN: []
  TEST:

[02/11 09:18:10 detectron2]: Full config saved to /content/vos/detection/data/VOC-Detection/faster-rcnn/vos/random_seed_0/config.yaml

GeneralizedRCNNLogisticGMM( (backbone): FPN( (fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1)) (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (top_block): LastLevelMaxPool() (bottom_up): ResNet( (stem): BasicStem( (conv1): Conv2d( 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) ) (res2): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv1): Conv2d( 64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv2): Conv2d( 64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05) ) (conv3): Conv2d( 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) ) ) (res3): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv1): Conv2d( 256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): 
FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv2): Conv2d( 128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05) ) (conv3): Conv2d( 128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) ) ) (res4): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) (conv1): Conv2d( 512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (3): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (4): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) (5): BottleneckBlock( (conv1): Conv2d( 1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv2): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05) ) (conv3): Conv2d( 256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05) ) ) ) (res5): Sequential( (0): BottleneckBlock( (shortcut): Conv2d( 1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) (conv1): 
Conv2d( 1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (1): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) (2): BottleneckBlock( (conv1): Conv2d( 2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv2): Conv2d( 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05) ) (conv3): Conv2d( 512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05) ) ) ) ) ) (proposal_generator): RPN( (rpn_head): StandardRPNHead( (conv): Conv2d( 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1) (activation): ReLU() ) (objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1)) (anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1)) ) (anchor_generator): DefaultAnchorGenerator( (cell_anchors): BufferList() ) ) (roi_heads): ROIHeadsLogisticGMMNew( (box_pooler): ROIPooler( (level_poolers): ModuleList( (0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True) (1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True) (2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True) (3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True) ) ) (box_head): FastRCNNConvFCHead( (flatten): Flatten(start_dim=1, end_dim=-1) (fc1): Linear(in_features=12544, out_features=1024, bias=True) (fc_relu1): ReLU() (fc2): Linear(in_features=1024, out_features=1024, bias=True) (fc_relu2): ReLU() ) (box_predictor): FastRCNNOutputLayers( (cls_score): Linear(in_features=1024, out_features=21, bias=True) (bbox_pred): Linear(in_features=1024, out_features=80, bias=True) ) (logistic_regression): Linear(in_features=1, out_features=2, bias=True) (noise): GaussianNoise() (weight_energy): Linear(in_features=20, out_features=1, bias=True) (cos): MSELoss() ) )
[02/11 09:18:15 d2.data.datasets.coco]: Loaded 16551 images in COCO format from /content/VOC_0712_converted/voc0712_train_all.json
[02/11 09:18:15 d2.data.build]: Removed 0 images with no usable annotations. 16551 images left.
[02/11 09:18:15 d2.data.build]: Distribution of instances among all 20 categories:
category      #instances   category      #instances   category      #instances
person 15576 bird 1820 cat 1616
cow 1058 dog 2079 horse 1156
sheep 1347 airplane 1285 bicycle 1208
boat 1397 bus 909 car 4008
motorcycle 1141 train 984 bottle 2116
chair 4338 dining table 1057 potted plant 1724
couch 1211 tv 1193
total 47223
[02/11 09:18:15 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in training: [ResizeShortestEdge(short_edge_length=(480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[02/11 09:18:15 d2.data.build]: Using training sampler TrainingSampler
[02/11 09:18:15 d2.data.common]: Serializing 16551 elements to byte tensors and concatenating them all ...
[02/11 09:18:15 d2.data.common]: Serialized dataset takes 6.22 MiB
[02/11 09:18:17 fvcore.common.checkpoint]: [Checkpointer] Loading from detectron2://ImageNetPretrained/MSRA/R-50.pkl ...
[02/11 09:18:17 d2.checkpoint.c2_model_loading]: Renaming Caffe2 weights ......
[02/11 09:18:17 d2.checkpoint.c2_model_loading]: Following weights matched with submodule backbone.bottom_up:
Names in Model            Names in Checkpoint            Shapes
res2.0.conv1.* res2_0branch2a{bn_*,w} (64,) (64,) (64,) (64,) (64,64,1,1)
res2.0.conv2.* res2_0branch2b{bn_*,w} (64,) (64,) (64,) (64,) (64,64,3,3)
res2.0.conv3.* res2_0branch2c{bn_*,w} (256,) (256,) (256,) (256,) (256,64,1,1)
res2.0.shortcut.* res2_0branch1{bn_*,w} (256,) (256,) (256,) (256,) (256,64,1,1)
res2.1.conv1.* res2_1branch2a{bn_*,w} (64,) (64,) (64,) (64,) (64,256,1,1)
res2.1.conv2.* res2_1branch2b{bn_*,w} (64,) (64,) (64,) (64,) (64,64,3,3)
res2.1.conv3.* res2_1branch2c{bn_*,w} (256,) (256,) (256,) (256,) (256,64,1,1)
res2.2.conv1.* res2_2branch2a{bn_*,w} (64,) (64,) (64,) (64,) (64,256,1,1)
res2.2.conv2.* res2_2branch2b{bn_*,w} (64,) (64,) (64,) (64,) (64,64,3,3)
res2.2.conv3.* res2_2branch2c{bn_*,w} (256,) (256,) (256,) (256,) (256,64,1,1)
res3.0.conv1.* res3_0branch2a{bn_*,w} (128,) (128,) (128,) (128,) (128,256,1,1)
res3.0.conv2.* res3_0branch2b{bn_*,w} (128,) (128,) (128,) (128,) (128,128,3,3)
res3.0.conv3.* res3_0branch2c{bn_*,w} (512,) (512,) (512,) (512,) (512,128,1,1)
res3.0.shortcut.* res3_0branch1{bn_*,w} (512,) (512,) (512,) (512,) (512,256,1,1)
res3.1.conv1.* res3_1branch2a{bn_*,w} (128,) (128,) (128,) (128,) (128,512,1,1)
res3.1.conv2.* res3_1branch2b{bn_*,w} (128,) (128,) (128,) (128,) (128,128,3,3)
res3.1.conv3.* res3_1branch2c{bn_*,w} (512,) (512,) (512,) (512,) (512,128,1,1)
res3.2.conv1.* res3_2branch2a{bn_*,w} (128,) (128,) (128,) (128,) (128,512,1,1)
res3.2.conv2.* res3_2branch2b{bn_*,w} (128,) (128,) (128,) (128,) (128,128,3,3)
res3.2.conv3.* res3_2branch2c{bn_*,w} (512,) (512,) (512,) (512,) (512,128,1,1)
res3.3.conv1.* res3_3branch2a{bn_*,w} (128,) (128,) (128,) (128,) (128,512,1,1)
res3.3.conv2.* res3_3branch2b{bn_*,w} (128,) (128,) (128,) (128,) (128,128,3,3)
res3.3.conv3.* res3_3branch2c{bn_*,w} (512,) (512,) (512,) (512,) (512,128,1,1)
res4.0.conv1.* res4_0branch2a{bn_*,w} (256,) (256,) (256,) (256,) (256,512,1,1)
res4.0.conv2.* res4_0branch2b{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
res4.0.conv3.* res4_0branch2c{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
res4.0.shortcut.* res4_0branch1{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,512,1,1)
res4.1.conv1.* res4_1branch2a{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
res4.1.conv2.* res4_1branch2b{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
res4.1.conv3.* res4_1branch2c{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
res4.2.conv1.* res4_2branch2a{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
res4.2.conv2.* res4_2branch2b{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
res4.2.conv3.* res4_2branch2c{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
res4.3.conv1.* res4_3branch2a{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
res4.3.conv2.* res4_3branch2b{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
res4.3.conv3.* res4_3branch2c{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
res4.4.conv1.* res4_4branch2a{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
res4.4.conv2.* res4_4branch2b{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
res4.4.conv3.* res4_4branch2c{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
res4.5.conv1.* res4_5branch2a{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
res4.5.conv2.* res4_5branch2b{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
res4.5.conv3.* res4_5branch2c{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
res5.0.conv1.* res5_0branch2a{bn_*,w} (512,) (512,) (512,) (512,) (512,1024,1,1)
res5.0.conv2.* res5_0branch2b{bn_*,w} (512,) (512,) (512,) (512,) (512,512,3,3)
res5.0.conv3.* res5_0branch2c{bn_*,w} (2048,) (2048,) (2048,) (2048,) (2048,512,1,1)
res5.0.shortcut.* res5_0branch1{bn_*,w} (2048,) (2048,) (2048,) (2048,) (2048,1024,1,1)
res5.1.conv1.* res5_1branch2a{bn_*,w} (512,) (512,) (512,) (512,) (512,2048,1,1)
res5.1.conv2.* res5_1branch2b{bn_*,w} (512,) (512,) (512,) (512,) (512,512,3,3)
res5.1.conv3.* res5_1branch2c{bn_*,w} (2048,) (2048,) (2048,) (2048,) (2048,512,1,1)
res5.2.conv1.* res5_2branch2a{bn_*,w} (512,) (512,) (512,) (512,) (512,2048,1,1)
res5.2.conv2.* res5_2branch2b{bn_*,w} (512,) (512,) (512,) (512,) (512,512,3,3)
res5.2.conv3.* res5_2branch2c{bn_*,w} (2048,) (2048,) (2048,) (2048,) (2048,512,1,1)
stem.conv1.norm.* res_conv1bn* (64,) (64,) (64,) (64,)
stem.conv1.weight conv1_w (64, 3, 7, 7)

WARNING [02/11 09:18:18 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
backbone.fpn_lateral2.{bias, weight}
backbone.fpn_lateral3.{bias, weight}
backbone.fpn_lateral4.{bias, weight}
backbone.fpn_lateral5.{bias, weight}
backbone.fpn_output2.{bias, weight}
backbone.fpn_output3.{bias, weight}
backbone.fpn_output4.{bias, weight}
backbone.fpn_output5.{bias, weight}
proposal_generator.rpn_head.anchor_deltas.{bias, weight}
proposal_generator.rpn_head.conv.{bias, weight}
proposal_generator.rpn_head.objectness_logits.{bias, weight}
roi_heads.box_head.fc1.{bias, weight}
roi_heads.box_head.fc2.{bias, weight}
roi_heads.box_predictor.bbox_pred.{bias, weight}
roi_heads.box_predictor.cls_score.{bias, weight}
roi_heads.logistic_regression.{bias, weight}
roi_heads.noise.noise
roi_heads.weight_energy.{bias, weight}
WARNING [02/11 09:18:18 fvcore.common.checkpoint]: The checkpoint state_dict contains keys that are not used by the model:
fc1000.{bias, weight}
stem.conv1.bias
[02/11 09:18:18 d2.engine.train_loop]: Starting training from iteration 0
ERROR [02/11 09:18:18 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/detectron2/engine/train_loop.py", line 149, in train
    self.run_step()
  File "/usr/local/lib/python3.7/dist-packages/detectron2/engine/defaults.py", line 494, in run_step
    self._trainer.run_step()
  File "/usr/local/lib/python3.7/dist-packages/detectron2/engine/train_loop.py", line 273, in run_step
    loss_dict = self.model(data)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'iteration'
[02/11 09:18:18 d2.engine.hooks]: Total training time: 0:00:00 (0:00:00 on hooks)
[02/11 09:18:18 d2.utils.events]: iter: 0  lr: N/A  max_mem: 245M
Traceback (most recent call last):
  File "train_net.py", line 110, in <module>
    args=(args,),
  File "/usr/local/lib/python3.7/dist-packages/detectron2/engine/launch.py", line 82, in launch
    main_func(*args)
  File "train_net.py", line 94, in main
    return trainer.train()
  File "/usr/local/lib/python3.7/dist-packages/detectron2/engine/defaults.py", line 484, in train
    super().train(self.start_iter, self.max_iter)
  File "/usr/local/lib/python3.7/dist-packages/detectron2/engine/train_loop.py", line 149, in train
    self.run_step()
  File "/usr/local/lib/python3.7/dist-packages/detectron2/engine/defaults.py", line 494, in run_step
    self._trainer.run_step()
  File "/usr/local/lib/python3.7/dist-packages/detectron2/engine/train_loop.py", line 273, in run_step
    loss_dict = self.model(data)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'iteration'
```
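For anyone debugging this: the traceback shows the stock detectron2 SimpleTrainer.run_step calling self.model(data), while GeneralizedRCNNLogisticGMM's forward evidently also expects an iteration argument. A minimal sketch of a run_step that would satisfy such a signature is below; it is illustrative only, assuming forward(batched_inputs, iteration), and is not the repo's actual trainer code (the repo's train_net.py handles this itself):

```python
from detectron2.engine import SimpleTrainer

# Illustrative sketch (not the repo's actual trainer): a run_step that
# forwards the current iteration to a model whose signature is
# forward(self, batched_inputs, iteration).
class IterationAwareTrainer(SimpleTrainer):
    def run_step(self):
        assert self.model.training, "model must be in training mode"
        data = next(self._data_loader_iter)
        # Stock SimpleTrainer calls self.model(data); passing self.iter as
        # well supplies the missing 'iteration' positional argument.
        loss_dict = self.model(data, self.iter)
        losses = sum(loss_dict.values())
        self.optimizer.zero_grad()
        losses.backward()
        self.optimizer.step()
```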

Domenicotech commented 2 years ago

Sorry, my careless mistake!

Jain-Archit commented 2 years ago

Hi,

I am facing the same error. Do you know what you did wrong?