Closed — Yoooss closed this issue 1 year ago
In addition to modifying the __init__.py, you need to register your customized dataset, following this example. Thanks!
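For context, registration in the OpenMMLab projects follows a decorator-based registry pattern. Below is a minimal self-contained sketch of that pattern (the `Registry` class here is a tiny stand-in for `mmcv.utils.Registry`, and `LsDataset` is the hypothetical custom dataset from this thread — the real decorator comes from `mmselfsup.datasets`):

```python
# Minimal stand-in for mmcv.utils.Registry, to illustrate how
# @DATASETS.register_module() makes a class visible by name.
class Registry:
    def __init__(self, name):
        self.name = name
        self._module_dict = {}

    def register_module(self):
        def _register(cls):
            # The decorator stores the class under its name at import time.
            self._module_dict[cls.__name__] = cls
            return cls
        return _register

    def get(self, key):
        return self._module_dict.get(key)


DATASETS = Registry('dataset')


@DATASETS.register_module()
class LsDataset:
    """Hypothetical custom dataset; the real one subclasses a BaseDataset."""
    pass


# An error like "LsDataset is not in the dataset registry" means this
# lookup returns None, i.e. the decorator never ran (the module defining
# the class was never imported).
print(DATASETS.get('LsDataset') is not None)  # → True
```

The key point is that registration is an import-time side effect, which is why the tutorial also requires editing __init__.py.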
I have already registered my dataset according to "1_new_dataset.md", as below.
But it still gives this result:
I searched for a solution, but could only find guides on adapting mmdetection to one's own VOC-format dataset. As that blog post said, I guess this error means that after following "1_new_dataset.md", I may also need to run some file as below.
But "1_new_dataset.md" doesn't include this step, so after trying many methods I still don't know how to solve this problem. Could you help me?
Thanks for your help.
I tried so many solutions, but still couldn't solve this problem. May I get some advice? Thanks for your help.
Sorry for the late reply. You should implement your dataset in mmdetection, instead of mmselfsup. Thanks!
By "implement your dataset in mmdetection", do you mean I should run the downstream object detection task in mmdetection? Then how should I use the downloaded mmselfsup checkpoint "byol_resnet50_8xb32-accum16-coslr-200e_in1k_20220225-5c8b2c2e.pth" in mmdetection? I also tried downloading the mmdetection-master project, but I don't know how to run an mmselfsup downstream task with the mmdetection project.
There is no need to run detection in mmdetection. You can follow the steps below:
1) Clone mmdetection to your local machine.
2) Create your customized dataset in mmdetection, following this tutorial.
3) In the root directory of mmdetection, run pip install -v -e .
4) Finally, you can use your customized dataset with the same command in mmselfsup, just as before.
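After step 3, one quick sanity check (a stdlib sketch, not from the thread) is to confirm that Python can resolve the editable install at all — `importlib.util.find_spec` returns None for a package that is not importable, so it will print False until the install succeeds:

```python
import importlib.util

# find_spec returns a ModuleSpec when a package is importable, else None.
# After `pip install -v -e .` in the mmdetection root, 'mmdet' should resolve.
spec = importlib.util.find_spec("mmdet")
print("mmdet importable:", spec is not None)
```

If this prints False, step 4 will fail regardless of how the dataset is registered, so it is worth checking first.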
Thanks for your help. I have finished steps 1~3. But in step 4, the same command from mmselfsup couldn't be used, because the benchmark command I use in mmselfsup is:
But mmdetection doesn't contain the same files, such as "mim_dist_train_c4.sh" and "configs/benchmarks/mmdetection/voc0712/faster_rcnn_r50_c4_mstrain_2x_voc0712.py".
So I copied these two files into the mmdetection directory and ran step 4 using the same command as in mmselfsup.
And I got this error:
The full output is:
2022-06-28 10:20:52,852 - mmdet - INFO - Set random seed to 1752550061, deterministic: False
2022-06-28 10:20:53,016 - mmdet - INFO - initialize ResNet with init_cfg {'type': 'Pretrained', 'checkpoint': '/home/ls/mmselfsup/checkpoints/simclr_resnet50_8xb32-coslr-200e_in1k_20220428-46ef6bb9.pth'}
2022-06-28 10:20:53,017 - mmcv - INFO - load model from: /home/ls/mmselfsup/checkpoints/simclr_resnet50_8xb32-coslr-200e_in1k_20220428-46ef6bb9.pth
2022-06-28 10:20:53,018 - mmcv - INFO - load checkpoint from local path: /home/ls/mmselfsup/checkpoints/simclr_resnet50_8xb32-coslr-200e_in1k_20220428-46ef6bb9.pth
2022-06-28 10:20:53,060 - mmcv - WARNING - The model and loaded state dict do not match exactly
unexpected key in source state_dict: layer4.0.conv1.weight, layer4.0.bn1.weight, layer4.0.bn1.bias, layer4.0.bn1.running_mean, layer4.0.bn1.running_var, layer4.0.bn1.num_batches_tracked, layer4.0.conv2.weight, layer4.0.bn2.weight, layer4.0.bn2.bias, layer4.0.bn2.running_mean, layer4.0.bn2.running_var, layer4.0.bn2.num_batches_tracked, layer4.0.conv3.weight, layer4.0.bn3.weight, layer4.0.bn3.bias, layer4.0.bn3.running_mean, layer4.0.bn3.running_var, layer4.0.bn3.num_batches_tracked, layer4.0.downsample.0.weight, layer4.0.downsample.1.weight, layer4.0.downsample.1.bias, layer4.0.downsample.1.running_mean, layer4.0.downsample.1.running_var, layer4.0.downsample.1.num_batches_tracked, layer4.1.conv1.weight, layer4.1.bn1.weight, layer4.1.bn1.bias, layer4.1.bn1.running_mean, layer4.1.bn1.running_var, layer4.1.bn1.num_batches_tracked, layer4.1.conv2.weight, layer4.1.bn2.weight, layer4.1.bn2.bias, layer4.1.bn2.running_mean, layer4.1.bn2.running_var, layer4.1.bn2.num_batches_tracked, layer4.1.conv3.weight, layer4.1.bn3.weight, layer4.1.bn3.bias, layer4.1.bn3.running_mean, layer4.1.bn3.running_var, layer4.1.bn3.num_batches_tracked, layer4.2.conv1.weight, layer4.2.bn1.weight, layer4.2.bn1.bias, layer4.2.bn1.running_mean, layer4.2.bn1.running_var, layer4.2.bn1.num_batches_tracked, layer4.2.conv2.weight, layer4.2.bn2.weight, layer4.2.bn2.bias, layer4.2.bn2.running_mean, layer4.2.bn2.running_var, layer4.2.bn2.num_batches_tracked, layer4.2.conv3.weight, layer4.2.bn3.weight, layer4.2.bn3.bias, layer4.2.bn3.running_mean, layer4.2.bn3.running_var, layer4.2.bn3.num_batches_tracked
2022-06-28 10:20:53,069 - mmdet - INFO - initialize RPNHead with init_cfg {'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01}
2022-06-28 10:20:53,124 - mmdet - INFO - initialize ResLayerExtraNorm with init_cfg {'type': 'Pretrained', 'checkpoint': '/home/ls/mmselfsup/checkpoints/simclr_resnet50_8xb32-coslr-200e_in1k_20220428-46ef6bb9.pth'}
2022-06-28 10:20:53,124 - mmcv - INFO - load model from: /home/ls/mmselfsup/checkpoints/simclr_resnet50_8xb32-coslr-200e_in1k_20220428-46ef6bb9.pth
2022-06-28 10:20:53,125 - mmcv - INFO - load checkpoint from local path: /home/ls/mmselfsup/checkpoints/simclr_resnet50_8xb32-coslr-200e_in1k_20220428-46ef6bb9.pth
2022-06-28 10:20:53,161 - mmcv - WARNING - The model and loaded state dict do not match exactly
unexpected key in source state_dict: conv1.weight, bn1.weight, bn1.bias, bn1.running_mean, bn1.running_var, bn1.num_batches_tracked, layer1.0.conv1.weight, layer1.0.bn1.weight, layer1.0.bn1.bias, layer1.0.bn1.running_mean, layer1.0.bn1.running_var, layer1.0.bn1.num_batches_tracked, layer1.0.conv2.weight, layer1.0.bn2.weight, layer1.0.bn2.bias, layer1.0.bn2.running_mean, layer1.0.bn2.running_var, layer1.0.bn2.num_batches_tracked, layer1.0.conv3.weight, layer1.0.bn3.weight, layer1.0.bn3.bias, layer1.0.bn3.running_mean, layer1.0.bn3.running_var, layer1.0.bn3.num_batches_tracked, layer1.0.downsample.0.weight, layer1.0.downsample.1.weight, layer1.0.downsample.1.bias, layer1.0.downsample.1.running_mean, layer1.0.downsample.1.running_var, layer1.0.downsample.1.num_batches_tracked, layer1.1.conv1.weight, layer1.1.bn1.weight, layer1.1.bn1.bias, layer1.1.bn1.running_mean, layer1.1.bn1.running_var, layer1.1.bn1.num_batches_tracked, layer1.1.conv2.weight, layer1.1.bn2.weight, layer1.1.bn2.bias, layer1.1.bn2.running_mean, layer1.1.bn2.running_var, layer1.1.bn2.num_batches_tracked, layer1.1.conv3.weight, layer1.1.bn3.weight, layer1.1.bn3.bias, layer1.1.bn3.running_mean, layer1.1.bn3.running_var, layer1.1.bn3.num_batches_tracked, layer1.2.conv1.weight, layer1.2.bn1.weight, layer1.2.bn1.bias, layer1.2.bn1.running_mean, layer1.2.bn1.running_var, layer1.2.bn1.num_batches_tracked, layer1.2.conv2.weight, layer1.2.bn2.weight, layer1.2.bn2.bias, layer1.2.bn2.running_mean, layer1.2.bn2.running_var, layer1.2.bn2.num_batches_tracked, layer1.2.conv3.weight, layer1.2.bn3.weight, layer1.2.bn3.bias, layer1.2.bn3.running_mean, layer1.2.bn3.running_var, layer1.2.bn3.num_batches_tracked, layer2.0.conv1.weight, layer2.0.bn1.weight, layer2.0.bn1.bias, layer2.0.bn1.running_mean, layer2.0.bn1.running_var, layer2.0.bn1.num_batches_tracked, layer2.0.conv2.weight, layer2.0.bn2.weight, layer2.0.bn2.bias, layer2.0.bn2.running_mean, layer2.0.bn2.running_var, layer2.0.bn2.num_batches_tracked, 
layer2.0.conv3.weight, layer2.0.bn3.weight, layer2.0.bn3.bias, layer2.0.bn3.running_mean, layer2.0.bn3.running_var, layer2.0.bn3.num_batches_tracked, layer2.0.downsample.0.weight, layer2.0.downsample.1.weight, layer2.0.downsample.1.bias, layer2.0.downsample.1.running_mean, layer2.0.downsample.1.running_var, layer2.0.downsample.1.num_batches_tracked, layer2.1.conv1.weight, layer2.1.bn1.weight, layer2.1.bn1.bias, layer2.1.bn1.running_mean, layer2.1.bn1.running_var, layer2.1.bn1.num_batches_tracked, layer2.1.conv2.weight, layer2.1.bn2.weight, layer2.1.bn2.bias, layer2.1.bn2.running_mean, layer2.1.bn2.running_var, layer2.1.bn2.num_batches_tracked, layer2.1.conv3.weight, layer2.1.bn3.weight, layer2.1.bn3.bias, layer2.1.bn3.running_mean, layer2.1.bn3.running_var, layer2.1.bn3.num_batches_tracked, layer2.2.conv1.weight, layer2.2.bn1.weight, layer2.2.bn1.bias, layer2.2.bn1.running_mean, layer2.2.bn1.running_var, layer2.2.bn1.num_batches_tracked, layer2.2.conv2.weight, layer2.2.bn2.weight, layer2.2.bn2.bias, layer2.2.bn2.running_mean, layer2.2.bn2.running_var, layer2.2.bn2.num_batches_tracked, layer2.2.conv3.weight, layer2.2.bn3.weight, layer2.2.bn3.bias, layer2.2.bn3.running_mean, layer2.2.bn3.running_var, layer2.2.bn3.num_batches_tracked, layer2.3.conv1.weight, layer2.3.bn1.weight, layer2.3.bn1.bias, layer2.3.bn1.running_mean, layer2.3.bn1.running_var, layer2.3.bn1.num_batches_tracked, layer2.3.conv2.weight, layer2.3.bn2.weight, layer2.3.bn2.bias, layer2.3.bn2.running_mean, layer2.3.bn2.running_var, layer2.3.bn2.num_batches_tracked, layer2.3.conv3.weight, layer2.3.bn3.weight, layer2.3.bn3.bias, layer2.3.bn3.running_mean, layer2.3.bn3.running_var, layer2.3.bn3.num_batches_tracked, layer3.0.conv1.weight, layer3.0.bn1.weight, layer3.0.bn1.bias, layer3.0.bn1.running_mean, layer3.0.bn1.running_var, layer3.0.bn1.num_batches_tracked, layer3.0.conv2.weight, layer3.0.bn2.weight, layer3.0.bn2.bias, layer3.0.bn2.running_mean, layer3.0.bn2.running_var, 
layer3.0.bn2.num_batches_tracked, layer3.0.conv3.weight, layer3.0.bn3.weight, layer3.0.bn3.bias, layer3.0.bn3.running_mean, layer3.0.bn3.running_var, layer3.0.bn3.num_batches_tracked, layer3.0.downsample.0.weight, layer3.0.downsample.1.weight, layer3.0.downsample.1.bias, layer3.0.downsample.1.running_mean, layer3.0.downsample.1.running_var, layer3.0.downsample.1.num_batches_tracked, layer3.1.conv1.weight, layer3.1.bn1.weight, layer3.1.bn1.bias, layer3.1.bn1.running_mean, layer3.1.bn1.running_var, layer3.1.bn1.num_batches_tracked, layer3.1.conv2.weight, layer3.1.bn2.weight, layer3.1.bn2.bias, layer3.1.bn2.running_mean, layer3.1.bn2.running_var, layer3.1.bn2.num_batches_tracked, layer3.1.conv3.weight, layer3.1.bn3.weight, layer3.1.bn3.bias, layer3.1.bn3.running_mean, layer3.1.bn3.running_var, layer3.1.bn3.num_batches_tracked, layer3.2.conv1.weight, layer3.2.bn1.weight, layer3.2.bn1.bias, layer3.2.bn1.running_mean, layer3.2.bn1.running_var, layer3.2.bn1.num_batches_tracked, layer3.2.conv2.weight, layer3.2.bn2.weight, layer3.2.bn2.bias, layer3.2.bn2.running_mean, layer3.2.bn2.running_var, layer3.2.bn2.num_batches_tracked, layer3.2.conv3.weight, layer3.2.bn3.weight, layer3.2.bn3.bias, layer3.2.bn3.running_mean, layer3.2.bn3.running_var, layer3.2.bn3.num_batches_tracked, layer3.3.conv1.weight, layer3.3.bn1.weight, layer3.3.bn1.bias, layer3.3.bn1.running_mean, layer3.3.bn1.running_var, layer3.3.bn1.num_batches_tracked, layer3.3.conv2.weight, layer3.3.bn2.weight, layer3.3.bn2.bias, layer3.3.bn2.running_mean, layer3.3.bn2.running_var, layer3.3.bn2.num_batches_tracked, layer3.3.conv3.weight, layer3.3.bn3.weight, layer3.3.bn3.bias, layer3.3.bn3.running_mean, layer3.3.bn3.running_var, layer3.3.bn3.num_batches_tracked, layer3.4.conv1.weight, layer3.4.bn1.weight, layer3.4.bn1.bias, layer3.4.bn1.running_mean, layer3.4.bn1.running_var, layer3.4.bn1.num_batches_tracked, layer3.4.conv2.weight, layer3.4.bn2.weight, layer3.4.bn2.bias, layer3.4.bn2.running_mean, 
layer3.4.bn2.running_var, layer3.4.bn2.num_batches_tracked, layer3.4.conv3.weight, layer3.4.bn3.weight, layer3.4.bn3.bias, layer3.4.bn3.running_mean, layer3.4.bn3.running_var, layer3.4.bn3.num_batches_tracked, layer3.5.conv1.weight, layer3.5.bn1.weight, layer3.5.bn1.bias, layer3.5.bn1.running_mean, layer3.5.bn1.running_var, layer3.5.bn1.num_batches_tracked, layer3.5.conv2.weight, layer3.5.bn2.weight, layer3.5.bn2.bias, layer3.5.bn2.running_mean, layer3.5.bn2.running_var, layer3.5.bn2.num_batches_tracked, layer3.5.conv3.weight, layer3.5.bn3.weight, layer3.5.bn3.bias, layer3.5.bn3.running_mean, layer3.5.bn3.running_var, layer3.5.bn3.num_batches_tracked
missing keys in source state_dict: norm.weight, norm.bias, norm.running_mean, norm.running_var
2022-06-28 10:20:53,169 - mmdet - INFO - initialize BBoxHead with init_cfg [{'type': 'Normal', 'std': 0.01, 'override': {'name': 'fc_cls'}}, {'type': 'Normal', 'std': 0.001, 'override': {'name': 'fc_reg'}}]
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
2022-06-28 10:20:55,392 - mmdet - INFO - Automatic scaling of learning rate (LR) has been disabled.
2022-06-28 10:20:55,399 - mmdet - INFO - Start running, host: ls@ls, work_dir: /home/ls/mmdetection-master/work_dirs/pascal_voc/faster_rcnn_r50_c4_mstrain_2x_voc0712/simclr_resnet50_8xb32-coslr-200e_in1k_20220428-46ef6bb9.pth
2022-06-28 10:20:55,400 - mmdet - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) CheckpointHook
(LOW ) DistEvalHook
(VERY_LOW ) TextLoggerHook
before_train_epoch:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) NumClassCheckHook
(NORMAL ) DistSamplerSeedHook
(LOW ) IterTimerHook
(LOW ) DistEvalHook
(VERY_LOW ) TextLoggerHook
before_train_iter:
(VERY_HIGH ) StepLrUpdaterHook
(LOW ) IterTimerHook
(LOW ) DistEvalHook
after_train_iter:
(ABOVE_NORMAL) OptimizerHook
(NORMAL ) CheckpointHook
(LOW ) IterTimerHook
(LOW ) DistEvalHook
(VERY_LOW ) TextLoggerHook
after_train_epoch:
(NORMAL ) CheckpointHook
(LOW ) DistEvalHook
(VERY_LOW ) TextLoggerHook
before_val_epoch:
(NORMAL ) NumClassCheckHook
(NORMAL ) DistSamplerSeedHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook
before_val_iter: (LOW ) IterTimerHook
after_val_iter: (LOW ) IterTimerHook
after_val_epoch: (VERY_LOW ) TextLoggerHook
after_run: (VERY_LOW ) TextLoggerHook
2022-06-28 10:20:55,400 - mmdet - INFO - workflow: [('train', 1)], max: 24 epochs
2022-06-28 10:20:55,400 - mmdet - INFO - Checkpoints will be saved to /home/ls/mmdetection-master/work_dirs/pascal_voc/faster_rcnn_r50_c4_mstrain_2x_voc0712/simclr_resnet50_8xb32-coslr-200e_in1k_20220428-46ef6bb9.pth by HardDiskBackend.
Traceback (most recent call last):
File "/home/ls/anaconda3/envs/mmselfsup/lib/python3.7/site-packages/mmdet/.mim/tools/train.py", line 242, in
Killing subprocess 7547
Traceback (most recent call last):
File "/home/ls/anaconda3/envs/mmselfsup/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/home/ls/anaconda3/envs/mmselfsup/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/ls/anaconda3/envs/mmselfsup/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in
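The "state dict do not match exactly" warnings in the log above come from mmcv comparing checkpoint keys against model keys at load time. A plain-Python sketch of that comparison (with small dicts standing in for the real state dicts, which you would obtain with torch.load on the checkpoint path) can make the warning easier to read — the C4 config's backbone stops at layer3, so the checkpoint's layer4.* weights show up as "unexpected", while ResLayerExtraNorm's extra norm.* parameters show up as "missing":

```python
def diff_state_dicts(model_keys, ckpt_keys):
    """Return (unexpected, missing) key sets, as the mmcv loader reports them."""
    model_keys, ckpt_keys = set(model_keys), set(ckpt_keys)
    unexpected = ckpt_keys - model_keys   # present in checkpoint, absent in model
    missing = model_keys - ckpt_keys      # present in model, absent in checkpoint
    return unexpected, missing


# Illustrative key names only, mirroring the warning above.
model = ['layer3.0.conv1.weight', 'norm.weight']
ckpt = ['layer3.0.conv1.weight', 'layer4.0.conv1.weight']
print(diff_state_dicts(model, ckpt))
# → ({'layer4.0.conv1.weight'}, {'norm.weight'})
```

Warnings of this shape are usually expected for a C4 backbone and are not themselves the cause of the crash below.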
I tried to run the task in mmselfsup again. This time it raised the error "TypeError: '_ClassNamespace' object is not callable".
I used the command "bash tools/benchmarks/mmdetection/mim_dist_train_c4.sh configs/benchmarks/mmdetection/voc0712/faster_rcnn_r50_c4_mstrain_2x_voc0712.py /home/ls/mmselfsup/checkpoints/simclr_resnet50_8xb32-coslr-200e_in1k_20220428-46ef6bb9.pth 1"
And now the previous error "LsDataset is not in the dataset registry" didn't occur again. Instead, I got the error that '_ClassNamespace' is not callable.
My dataset's annotation is like:
I don't know what is wrong. May I get some advice?
Thanks for your help.
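Since the dataset format itself is under suspicion here, a quick stdlib check (a sketch, not part of the thread; the sample XML is illustrative) can verify that a VOC-style annotation file exposes the fields mmdet's VOC loader reads — a class name per object plus integer bndbox coordinates:

```python
import xml.etree.ElementTree as ET

# Hypothetical VOC-style annotation, trimmed to the fields that matter.
SAMPLE = """<annotation>
  <filename>000001.jpg</filename>
  <size><width>500</width><height>375</height><depth>3</depth></size>
  <object>
    <name>dog</name>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
</annotation>"""

root = ET.fromstring(SAMPLE)
# Collect each object's class name and its box as [xmin, ymin, xmax, ymax].
names = [obj.findtext('name') for obj in root.iter('object')]
boxes = [[int(obj.find('bndbox').findtext(t))
          for t in ('xmin', 'ymin', 'xmax', 'ymax')]
         for obj in root.iter('object')]
print(names, boxes)  # → ['dog'] [[48, 240, 195, 371]]
```

Running the same parse over a few real annotation files (via ET.parse) will surface missing tags or non-numeric coordinates before training starts.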
The command I used in mmselfsup is:
And I got this error:
I don't know what caused this error, and I have already checked my dataset format, as the comment above shows.
I have now tried both mmdetection and mmselfsup, and neither runs successfully. mmdetection raised the error: "RuntimeError: Tried to instantiate class 'file.file', but it does not exist! Ensure that it is registered via torch::class_"
mmselfsup, using the command "bash tools/benchmarks/mmdetection/mim_dist_train_c4.sh configs/benchmarks/mmdetection/voc0712/faster_rcnn_r50_c4_mstrain_2x_voc0712.py /home/ls/mmselfsup/checkpoints/simclr_resnet50_8xb32-coslr-200e_in1k_20220428-46ef6bb9.pth 1", raised the error: "TypeError: '_ClassNamespace' object is not callable"
I really don't know how to solve this problem. May I get some advice? Thanks for your help.
Sorry for the late reply. Could you please paste configs/benchmarks/mmdetection/voc0712/faster_rcnn_r50_c4_mstrain_2x_voc0712.py into this issue?
Thanks for your help. After many attempts, I have now run it successfully in mmselfsup.
However, when I use the VOC dataset, I get the result "mAP 0.404", and in the document "model_zoo.md" the object detection downstream task also reports low accuracy, as below.
In papers, the object detection downstream task usually reaches accuracy higher than 0.75, but in mmselfsup the accuracy is low. May I ask why the performance is not as good as the official papers report? Is it due to my wrong operations?
The figure you put above is about the detection results on COCO, instead of VOC. Thanks!
I know that the figure I put above is about the detection results on COCO. I meant that the accuracy is low on both the VOC dataset and the COCO dataset. For example, in the figure above the mAP on COCO is around 0.38, and in the test I ran on VOC, the mAP is about 0.40. May I learn what causes this low accuracy, unlike what these papers report?
Can you advise on how to use the COCO dataset for object detection on mmselfsup? Very much looking forward to your reply.
I have the same question as @zgp123-wq.
I would like to know the steps to implement object detection with the COCO dataset format on the Faster R-CNN model with a self-supervised approach.
In this repository, they used the Mask R-CNN model with the COCO dataset format only for segmentation, while the Faster R-CNN model is used only with the VOC dataset format. My dataset is in COCO format, and I want to test different models within the mmdetection library.
Thanks in advance.
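For what it's worth, in mmdet-style configs the switch from VOC to COCO-format annotations is mostly a dataset-section change. The fragment below is a hedged sketch only — the paths and class names are placeholders, not values from this thread, and the full pipeline/evaluation settings are omitted:

```python
# Hypothetical config fragment: pointing a Faster R-CNN benchmark config
# at COCO-format annotations instead of VOC. Paths/classes are placeholders.
dataset_type = 'CocoDataset'
classes = ('class1', 'class2')  # your own category names, in annotation order

data = dict(
    train=dict(
        type=dataset_type,
        classes=classes,
        ann_file='data/my_coco/annotations/train.json',
        img_prefix='data/my_coco/train/'),
    val=dict(
        type=dataset_type,
        classes=classes,
        ann_file='data/my_coco/annotations/val.json',
        img_prefix='data/my_coco/val/'))
```

The model section (backbone init_cfg pointing at the mmselfsup checkpoint) stays as in the VOC benchmark config.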
The issue will be closed. If you have any other questions, feel free to open a new one.
I have modified the files as the tutorial "1_new_dataset.md" describes, including creating the new files "ls_dataset.py" and "ls_data_source.py" and modifying "__init__.py" according to the tutorial. However, when I run the command "bash tools/benchmarks/mmdetection/mim_dist_train_c4.sh configs/benchmarks/mmdetection/voc0712/faster_rcnn_r50_c4_mstrain_2x_voc0712.py /home/ls/mmselfsup/checkpoints/simclr_resnet50_8xb32-coslr-200e_in1k_20220428-46ef6bb9.pth 1"
it raises this error:
May I ask how to solve it? Thanks for your help.