Closed · balandongiv closed this issue 2 years ago
Hi @balandongiv It may be a problem with the dataset path. Please share your full config with us.
Thanks for replying @Mountchicken .
The images are stored as follows:
#/content/batch2_v2
———batch2_v2
    |__ mmocr_compatible_annotation
    |__ img1.png
    |__ img1.json
    |__ img2.png
    |__ img2.json
    |__ img3.png
    |__ img3.json
and
#/content/batch2_v2/mmocr_compatible_annotation
———mmocr_compatible_annotation
    |__ crops  ## contains cropped images for text recognition
    |__ instances_training.txt
    |__ train_label.jsonl
The config is as below
_base_ = [
    '/content/mmocr/configs/_base_/default_runtime.py',
    '/content/mmocr/configs/_base_/schedules/schedule_sgd_1200e.py',
    '/content/mmocr/configs/_base_/det_models/dbnetpp_r50dcnv2_fpnc.py',
    '/content/mmocr/configs/_base_/det_pipelines/dbnet_pipeline.py',
]

# The YML suggests DBNet++ was trained with 1x Nvidia A100
# Location where the annotations and cropped images are stored
root = '/content/wdr'
# Working directory for saving checkpoints and logs
work_dir = f'{root}/train_detect/base_dbnetpp'

train_root_custm1 = '/content/batch2_v2'
train_custm1 = dict(  # New custom dataset
    type='TextDetDataset',
    img_prefix=train_root_custm1,
    ann_file=f'{train_root_custm1}/mmocr_compatible_annotation/instances_training.txt',
    loader=dict(
        type='AnnFileLoader',
        repeat=300,
        file_format='txt',
        parser=dict(
            type='LineJsonParser',
            keys=['file_name', 'height', 'width', 'annotations'])),
    pipeline=None,
    test_mode=False)

val_custm1 = dict(  # New custom dataset
    type='TextDetDataset',
    img_prefix=train_root_custm1,
    ann_file=f'{train_root_custm1}/mmocr_compatible_annotation/instances_training.txt',
    loader=dict(
        type='AnnFileLoader',
        repeat=1,
        file_format='txt',
        parser=dict(
            type='LineJsonParser',
            keys=['file_name', 'height', 'width', 'annotations'])),
    pipeline=None,
    test_mode=False)

train_list = [train_custm1]
test_list = [val_custm1]

train_pipeline_r50dcnv2 = {{_base_.train_pipeline_r50dcnv2}}
test_pipeline_4068_1024 = {{_base_.test_pipeline_4068_1024}}

data = dict(
    samples_per_gpu=16,  # Default 32
    workers_per_gpu=8,
    val_dataloader=dict(samples_per_gpu=1),
    test_dataloader=dict(samples_per_gpu=1),
    train=dict(
        type='UniformConcatDataset',
        datasets=train_list,
        pipeline=train_pipeline_r50dcnv2),
    val=dict(
        type='UniformConcatDataset',
        datasets=test_list,
        pipeline=test_pipeline_4068_1024),
    test=dict(
        type='UniformConcatDataset',
        datasets=test_list,
        pipeline=test_pipeline_4068_1024))

evaluation = dict(
    interval=20,
    metric='hmean-iou')
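One way to narrow down which lines make the loader skip files is to sanity-check each line of instances_training.txt before training. Below is a minimal, hypothetical sketch (the helper name and the exact checks are my own assumptions, not part of mmocr); the expected keys follow the LineJsonParser config above, and each polygon is assumed to be a flat, even-length list of at least 8 coordinates:

```python
import json

def check_ann_line(line):
    """Return a list of problems found in one annotation line (empty = OK).

    Hypothetical helper; expected keys follow the LineJsonParser config:
    ['file_name', 'height', 'width', 'annotations'].
    """
    try:
        info = json.loads(line)
    except json.JSONDecodeError as exc:
        return [f'invalid JSON: {exc}']
    problems = [f'missing key: {k}'
                for k in ('file_name', 'height', 'width', 'annotations')
                if k not in info]
    for i, ann in enumerate(info.get('annotations', [])):
        for seg in ann.get('segmentation', []):
            # Each polygon should be a flat, even-length list of >= 8 numbers
            if (not isinstance(seg, list) or len(seg) < 8 or len(seg) % 2
                    or not all(isinstance(v, (int, float)) for v in seg)):
                problems.append(f'annotation {i}: bad segmentation {seg!r}')
    return problems

# Demo on a well-formed line like the ones in the dataset
demo = ('{"file_name": "img_1.jpg", "height": 720, "width": 1280, '
        '"annotations": [{"iscrowd": 0, "category_id": 1, '
        '"bbox": [377, 117, 88, 13], '
        '"segmentation": [[377, 117, 463, 117, 465, 130, 378, 130]], '
        '"text": "Genaxis Theatre"}]}')
print(check_ann_line(demo))  # []
```

Running every line of instances_training.txt through a check like this should point at the exact annotations that trigger the "skip broken file" warnings, if the data itself is the cause.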
Update
Removing the crops folder, i.e.
#/content/batch2_v2/mmocr_compatible_annotation
———mmocr_compatible_annotation
    |__ instances_training.txt
    |__ train_label.jsonl
also results in a similar warning being thrown.
Hi @Mountchicken, I notice this issue also happens when I am using the toy dataset, which can be reproduced via the following notebook.
In summary, here are the steps.
First, convert the COCO annotations to the mmocr-compatible format via !python /mmocr/tools/data/common/labelme_converter.py /mmocr/tests/data/toy_dataset/labelme /mmocr/tests/data/toy_dataset/imgs/test /content --tasks recog --format jsonl
and with the following config
_base_ = [
    '/mmocr/configs/_base_/default_runtime.py',
    '/mmocr/configs/_base_/schedules/schedule_sgd_1200e.py',
    '/mmocr/configs/_base_/det_models/dbnetpp_r50dcnv2_fpnc.py',
    '/mmocr/configs/_base_/det_pipelines/dbnet_pipeline.py',
]

# The YML suggests DBNet++ was trained with 1x Nvidia A100
# Location where the annotations and cropped images are stored
root = '/content/wdr'
# Working directory for saving checkpoints and logs
work_dir = f'{root}/train_detect/base_dbnetpp'

train_root_custm1 = '/mmocr/tests/data/toy_dataset/imgs/test'
train_custm1 = dict(  # New custom dataset
    type='TextDetDataset',
    img_prefix=train_root_custm1,
    ann_file='/content/instances_training.txt',
    loader=dict(
        type='AnnFileLoader',
        repeat=300,
        file_format='txt',
        parser=dict(
            type='LineJsonParser',
            keys=['file_name', 'height', 'width', 'annotations'])),
    pipeline=None,
    test_mode=False)

val_custm1 = dict(  # New custom dataset
    type='TextDetDataset',
    img_prefix=train_root_custm1,
    ann_file='/content/instances_training.txt',
    loader=dict(
        type='AnnFileLoader',
        repeat=1,
        file_format='txt',
        parser=dict(
            type='LineJsonParser',
            keys=['file_name', 'height', 'width', 'annotations'])),
    pipeline=None,
    test_mode=False)

train_list = [train_custm1]
test_list = [val_custm1]

train_pipeline_r50dcnv2 = {{_base_.train_pipeline_r50dcnv2}}
test_pipeline_4068_1024 = {{_base_.test_pipeline_4068_1024}}

data = dict(
    samples_per_gpu=16,  # Default 32
    workers_per_gpu=8,
    val_dataloader=dict(samples_per_gpu=1),
    test_dataloader=dict(samples_per_gpu=1),
    train=dict(
        type='UniformConcatDataset',
        datasets=train_list,
        pipeline=train_pipeline_r50dcnv2),
    val=dict(
        type='UniformConcatDataset',
        datasets=test_list,
        pipeline=test_pipeline_4068_1024),
    test=dict(
        type='UniformConcatDataset',
        datasets=test_list,
        pipeline=test_pipeline_4068_1024))

evaluation = dict(
    interval=20,
    metric='hmean-iou')
Training the model produces the following warning/error:
2022-07-08 01:26:25,505 - mmocr - INFO - workflow: [('train', 1)], max: 6 epochs 2022-07-08 01:26:25,507 - mmocr - INFO - Checkpoints will be saved to /content/wdr/train_detect/base_dbnetpp by HardDiskBackend. prepare index 716 with error 'Polygon' object is not iterable prepare index 1040 with error 'Polygon' object is not iterable Warning: skip broken file {'file_name': 'img_1.jpg', 'height': 720, 'width': 1280, 'annotations': [{'iscrowd': 0, 'category_id': 1, 'bbox': [377, 117, 88, 13], 'segmentation': [[377, 117, 463, 117, 465, 130, 378, 130]], 'text': 'Genaxis Theatre'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [493, 115, 26, 16], 'segmentation': [[493, 115, 519, 115, 519, 131, 493, 131]], 'text': '[06]'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [374, 155, 35, 15], 'segmentation': [[374, 155, 409, 155, 409, 170, 374, 170]], 'text': '###'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [492, 151, 59, 19], 'segmentation': [[492, 151, 551, 151, 551, 170, 492, 170]], 'text': '62-03'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [376, 198, 46, 14], 'segmentation': [[376, 198, 422, 198, 422, 212, 376, 212]], 'text': 'Carpark'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [494, 189, 45, 17], 'segmentation': [[494, 190, 539, 189, 539, 205, 494, 206]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [372, 0, 122, 86], 'segmentation': [[374, 1, 494, 0, 492, 85, 372, 86]], 'text': '###'}]} with img_prefix /mmocr/tests/data/toy_dataset/imgs/test Warning: skip broken file {'file_name': 'img_3.jpg', 'height': 720, 'width': 1280, 'annotations': [{'iscrowd': 0, 'category_id': 1, 'bbox': [58, 71, 136, 52], 'segmentation': [[58, 80, 191, 71, 194, 114, 61, 123]], 'text': 'fusionopolis'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [147, 21, 29, 15], 'segmentation': [[147, 21, 176, 21, 176, 36, 147, 36]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [326, 75, 65, 38], 'segmentation': [[328, 75, 391, 81, 387, 112, 326, 113]], 'text': '###'}, {'iscrowd': 1, 'category_id': 
1, 'bbox': [401, 76, 47, 35], 'segmentation': [[401, 76, 448, 84, 445, 108, 402, 111]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [780, 6, 236, 36], 'segmentation': [[780, 7, 1015, 6, 1016, 37, 788, 42]], 'text': '###'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [221, 72, 91, 46], 'segmentation': [[221, 72, 311, 80, 312, 117, 222, 118]], 'text': 'fusionopolis'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [113, 19, 31, 14], 'segmentation': [[113, 19, 144, 19, 144, 33, 113, 33]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [257, 28, 51, 29], 'segmentation': [[257, 28, 308, 28, 308, 57, 257, 57]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [140, 115, 56, 18], 'segmentation': [[140, 120, 196, 115, 195, 129, 141, 133]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [86, 176, 26, 20], 'segmentation': [[86, 176, 110, 177, 112, 189, 89, 196]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [101, 185, 31, 19], 'segmentation': [[101, 193, 129, 185, 132, 198, 103, 204]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [223, 150, 71, 47], 'segmentation': [[223, 175, 244, 150, 294, 183, 235, 197]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [140, 232, 36, 24], 'segmentation': [[140, 239, 174, 232, 176, 247, 142, 256]], 'text': '###'}]} with img_prefix /mmocr/tests/data/toy_dataset/imgs/test prepare index 2214 with error 'Polygon' object is not iterable prepare index 1109 with error 'Polygon' object is not iterable prepare index 2149 with error 'Polygon' object is not iterable Warning: skip broken file {'file_name': 'img_5.jpg', 'height': 720, 'width': 1280, 'annotations': [{'iscrowd': 1, 'category_id': 1, 'bbox': [405, 409, 32, 52], 'segmentation': [[408, 409, 437, 436, 434, 461, 405, 433]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [435, 434, 8, 33], 'segmentation': [[437, 434, 443, 440, 441, 467, 435, 462]], 'text': '###'}]} with img_prefix /mmocr/tests/data/toy_dataset/imgs/test 
Warning: skip broken file {'file_name': 'img_2.jpg', 'height': 720, 'width': 1280, 'annotations': [{'iscrowd': 0, 'category_id': 1, 'bbox': [602, 173, 33, 24], 'segmentation': [[602, 173, 635, 175, 634, 197, 602, 196]], 'text': 'EXIT'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [734, 310, 58, 54], 'segmentation': [[734, 310, 792, 320, 792, 364, 738, 361]], 'text': 'I2R'}]} with img_prefix /mmocr/tests/data/toy_dataset/imgs/test Warning: skip broken file {'file_name': 'img_2.jpg', 'height': 720, 'width': 1280, 'annotations': [{'iscrowd': 0, 'category_id': 1, 'bbox': [602, 173, 33, 24], 'segmentation': [[602, 173, 635, 175, 634, 197, 602, 196]], 'text': 'EXIT'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [734, 310, 58, 54], 'segmentation': [[734, 310, 792, 320, 792, 364, 738, 361]], 'text': 'I2R'}]} with img_prefix /mmocr/tests/data/toy_dataset/imgs/test prepare index 1171 with error 'Polygon' object is not iterable Warning: skip broken file {'file_name': 'img_10.jpg', 'height': 720, 'width': 1280, 'annotations': [{'iscrowd': 1, 'category_id': 1, 'bbox': [260, 138, 24, 20], 'segmentation': [[261, 138, 284, 140, 279, 158, 260, 158]], 'text': '###'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [288, 138, 129, 23], 'segmentation': [[288, 138, 417, 140, 416, 161, 290, 157]], 'text': 'HarbourFront'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [743, 145, 37, 18], 'segmentation': [[743, 145, 779, 146, 780, 163, 746, 163]], 'text': 'CC22'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [783, 129, 50, 26], 'segmentation': [[783, 129, 831, 132, 833, 155, 785, 153]], 'text': 'bua'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [831, 133, 43, 23], 'segmentation': [[831, 133, 870, 135, 874, 156, 835, 155]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [159, 204, 72, 15], 'segmentation': [[159, 205, 230, 204, 231, 218, 159, 219]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [785, 158, 75, 21], 'segmentation': [[785, 158, 856, 158, 860, 178, 787, 179]], 'text': '###'}, 
{'iscrowd': 1, 'category_id': 1, 'bbox': [1011, 157, 68, 16], 'segmentation': [[1011, 157, 1079, 160, 1076, 173, 1011, 170]], 'text': '###'}]} with img_prefix /mmocr/tests/data/toy_dataset/imgs/test prepare index 948 with error 'Polygon' object is not iterable Warning: skip broken file {'file_name': 'img_7.jpg', 'height': 720, 'width': 1280, 'annotations': [{'iscrowd': 1, 'category_id': 1, 'bbox': [345, 130, 56, 23], 'segmentation': [[346, 133, 400, 130, 401, 148, 345, 153]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [301, 123, 50, 35], 'segmentation': [[301, 127, 349, 123, 351, 154, 303, 158]], 'text': '###'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [869, 61, 54, 30], 'segmentation': [[869, 67, 920, 61, 923, 85, 872, 91]], 'text': 'citi'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [884, 141, 50, 19], 'segmentation': [[886, 144, 934, 141, 932, 157, 884, 160]], 'text': 'smrt'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [634, 86, 182, 35], 'segmentation': [[634, 106, 812, 86, 816, 104, 634, 121]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [418, 112, 53, 36], 'segmentation': [[418, 117, 469, 112, 471, 143, 420, 148]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [634, 107, 149, 28], 'segmentation': [[634, 124, 781, 107, 783, 123, 635, 135]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [634, 117, 210, 38], 'segmentation': [[634, 138, 844, 117, 843, 141, 636, 155]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [468, 117, 57, 26], 'segmentation': [[468, 124, 518, 117, 525, 138, 468, 143]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [301, 162, 231, 39], 'segmentation': [[301, 181, 532, 162, 530, 182, 301, 201]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [296, 147, 104, 27], 'segmentation': [[296, 157, 396, 147, 400, 165, 300, 174]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [420, 136, 107, 27], 'segmentation': [[420, 151, 526, 136, 527, 154, 421, 163]], 'text': 
'###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [616, 250, 41, 35], 'segmentation': [[617, 251, 657, 250, 656, 282, 616, 285]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [695, 243, 43, 35], 'segmentation': [[695, 246, 738, 243, 738, 276, 698, 278]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [739, 241, 24, 21], 'segmentation': [[739, 241, 760, 241, 763, 260, 742, 262]], 'text': '###'}]} with img_prefix /mmocr/tests/data/toy_dataset/imgs/test prepare index 1017 with error 'Polygon' object is not iterable Warning: skip broken file {'file_name': 'img_6.jpg', 'height': 720, 'width': 1280, 'annotations': [{'iscrowd': 1, 'category_id': 1, 'bbox': [875, 92, 35, 20], 'segmentation': [[875, 92, 910, 92, 910, 112, 875, 112]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [748, 95, 39, 14], 'segmentation': [[748, 95, 787, 95, 787, 109, 748, 109]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [106, 394, 47, 31], 'segmentation': [[106, 395, 150, 394, 153, 425, 106, 424]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [165, 393, 48, 28], 'segmentation': [[165, 393, 213, 396, 210, 421, 165, 421]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [705, 49, 42, 15], 'segmentation': [[706, 52, 747, 49, 746, 62, 705, 64]], 'text': '###'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [111, 459, 96, 23], 'segmentation': [[111, 459, 206, 461, 207, 482, 113, 480]], 'text': 'Reserve'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [831, 9, 63, 13], 'segmentation': [[831, 9, 894, 9, 894, 22, 831, 22]], 'text': '###'}, {'iscrowd': 0, 'category_id': 1, 'bbox': [641, 454, 52, 15], 'segmentation': [[641, 456, 693, 454, 693, 467, 641, 469]], 'text': 'CAUTION'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [839, 32, 52, 15], 'segmentation': [[839, 32, 891, 32, 891, 47, 839, 47]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [788, 46, 43, 13], 'segmentation': [[788, 46, 831, 46, 831, 59, 788, 59]], 'text': '###'}, {'iscrowd': 
1, 'category_id': 1, 'bbox': [830, 95, 42, 11], 'segmentation': [[830, 95, 872, 95, 872, 106, 830, 106]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [921, 92, 31, 19], 'segmentation': [[921, 92, 952, 92, 952, 111, 921, 111]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [968, 40, 45, 13], 'segmentation': [[968, 40, 1013, 40, 1013, 53, 968, 53]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [1002, 89, 29, 11], 'segmentation': [[1002, 89, 1031, 89, 1031, 100, 1002, 100]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [1043, 38, 55, 14], 'segmentation': [[1043, 38, 1098, 38, 1098, 52, 1043, 52]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [1069, 85, 69, 14], 'segmentation': [[1069, 85, 1138, 85, 1138, 99, 1069, 99]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [1128, 36, 50, 16], 'segmentation': [[1128, 36, 1178, 36, 1178, 52, 1128, 52]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [1168, 84, 32, 13], 'segmentation': [[1168, 84, 1200, 84, 1200, 97, 1168, 97]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [1219, 27, 40, 22], 'segmentation': [[1223, 27, 1259, 27, 1255, 49, 1219, 49]], 'text': '###'}, {'iscrowd': 1, 'category_id': 1, 'bbox': [1264, 28, 15, 18], 'segmentation': [[1264, 28, 1279, 28, 1279, 46, 1264, 46]], 'text': '###'}]} with img_prefix /mmocr/tests/data/toy_dataset/imgs/test
@balandongiv Sorry for the late reply. I met the same problem on Colab before, and it threw the same error. This may still be a problem with the path. You could try putting the annotation files and images in one folder, like mmocr does:
|__ root_dir
    |__ imgs
        |__ img1.jpg
        |__ img2.jpg
    |__ instance_training.json
Thanks for replying @Mountchicken, I appreciate it (it's really like waiting for Santa). Anyhow, I did as recommended, but the issue still persists and can be reproduced via this notebook.
The file/folder structure is as below:
#/mmocr/tests/data/toy_dataset/imgs
———imgs
    |__ crops
    |__ test
        |__ img1.jpg
        |__ img2.jpg
    |__ instances_training.txt
The content of instances_training.txt is as below:
{"file_name": "img_3.jpg", "height": 720, "width": 1280, "annotations": [{"iscrowd": 0, "category_id": 1, "bbox": [58, 71, 136, 52], "segmentation": [[58, 80, 191, 71, 194, 114, 61, 123]], "text": "fusionopolis"}, {"iscrowd": 1, "category_id": 1, "bbox": [147, 21, 29, 15], "segmentation": [[147, 21, 176, 21, 176, 36, 147, 36]], "text": "###"}, {"iscrowd": 1, "category_id": 1, "bbox": [326, 75, 65, 38], "segmentation": [[328, 75, 391, 81, 387, 112, 326, 113]], "text": "###"}, {"iscrowd": 1, "category_id": 1, "bbox": [401, 76, 47, 35], "segmentation": [[401, 76, 448, 84, 445, 108, 402, 111]], "text": "###"}, {"iscrowd": 1, "category_id": 1, "bbox": [780, 6, 236, 36], "segmentation": [[780, 7, 1015, 6, 1016, 37, 788, 42]], "text": "###"}, {"iscrowd": 0, "category_id": 1, "bbox": [221, 72, 91, 46], "segmentation": [[221, 72, 311, 80, 312, 117, 222, 118]], "text": "fusionopolis"}, {"iscrowd": 1, "category_id": 1, "bbox": [113, 19,}
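Since the suspicion so far is a path problem, one quick check is whether every file_name in the annotation file actually resolves under img_prefix. A rough sketch, with the function name and behaviour my own invention rather than mmocr API (it joins each line's file_name with img_prefix the same way the dataset config does):

```python
import json
import os

def find_missing_images(ann_file, img_prefix):
    """Return the image paths referenced by ann_file that do not exist.

    Hypothetical debugging helper: joins each line's 'file_name' with
    img_prefix and reports paths that are absent on disk.
    """
    missing = []
    with open(ann_file) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            info = json.loads(line)
            path = os.path.join(img_prefix, info['file_name'])
            if not os.path.exists(path):
                missing.append(path)
    return missing

# e.g. find_missing_images('/content/instances_training.txt',
#                          '/mmocr/tests/data/toy_dataset/imgs/test')
```

If this returns a non-empty list, the dataset will silently skip those entries, which would match the warnings above.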
Have you tested it on a GPU machine? I am able to train the model with your config without any trouble on my machine, but on Colab the process does fail on the transform. Maybe there is some minor difference we were not able to realize.
Thanks for responding @gaotongxiao. Right now, I am having an issue installing mmcv-full on my local machine, hence the desperation to train the model under Colab.
May I know what you mean by
some minor difference we were not able to realize
Is it from my side or a setting within mmocr?
I'd suggest you install mmcv-full via mim. First make sure you have PyTorch installed, then:
pip install openmim
mim install mmcv-full
mim should then find the mmcv-full package that works best for your configuration.
May I know what do you mean by
some minor difference we were not able to realize
Is it from my side or the setting within the mmocr?
I was not able to identify the reason as well, and the only clue that I have is that your config actually works well locally.
Thanks for replying @gaotongxiao,
Actually, I have installed it using the recommended approach, but I get RuntimeError: box_iou_rotated_impl: implementation for device cuda:0 not found
when running the test file, as I reported here.
Similarly, when running the following in my local machine
_base_ = [
    'configs/_base_/default_runtime.py',
    'configs/_base_/schedules/schedule_sgd_1200e.py',
    'configs/_base_/det_models/dbnetpp_r50dcnv2_fpnc.py',
    'configs/_base_/det_pipelines/dbnet_pipeline.py',
]
it produces
RuntimeError: modulated_deformable_im2col_impl: implementation for device cuda:0 not found.
I think the issue boils down to implementation for device cuda:0 not found
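For errors like this, a common cause is a version mismatch: the mmcv-full wheel was built against a different PyTorch/CUDA combination than the one installed. A hypothetical helper (names are my own) that collects the version facts needed to compare the two sides:

```python
import importlib.util

def cuda_build_report():
    """Collect version facts for diagnosing a CUDA-op mismatch.

    Hypothetical helper: "implementation for device cuda:0 not found"
    often means the mmcv-full build does not match the installed
    PyTorch/CUDA combination, so gather both for comparison.
    """
    report = {}
    if importlib.util.find_spec('torch') is not None:
        import torch
        report['torch'] = torch.__version__
        report['torch_cuda'] = torch.version.cuda
        report['cuda_available'] = torch.cuda.is_available()
    if importlib.util.find_spec('mmcv') is not None:
        import mmcv
        report['mmcv'] = mmcv.__version__
    return report

print(cuda_build_report())
```

If torch_cuda here differs from the CUDA version the mmcv-full wheel was built for, reinstalling a matching wheel is the usual fix.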
Hi @gaotongxiao & @Mountchicken,
I can now confirm the issue only occurs in Google Colab and not on my local machine. Thanks for your time.
While training dbnetpp, the compiler returns Warning: skip broken file. Unfortunately, this happens for almost all the images. Given the limited dataset, I am trying my best to utilise as many images as possible. May I know how to resolve this issue?
The full traceback is as below:
In addition, storing the images directly under the colab directory produces a similar issue. Is the issue related to here?