ShawnNew opened this issue 4 years ago
Are your images massive or something? You're not supposed to change max_size in the config, since the model will automatically resize images to that size.
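A minimal sketch of that automatic resize, assuming the default base config (where `preserve_aspect_ratio` is `False`, so inputs are warped to a `max_size` x `max_size` square; the function name `resize_plan` is mine, not YOLACT's):

```python
def resize_plan(h, w, max_size=550):
    """Target shape and per-axis scale factors for the default resize.

    Assumes the base config's behavior (preserve_aspect_ratio=False):
    the image is warped to a max_size x max_size square, and box/mask
    x and y coordinates are scaled by max_size/w and max_size/h.
    """
    return (max_size, max_size), (max_size / w, max_size / h)
```

Since the network warps inputs itself, pre-resizing your dataset on disk mainly saves loading time; it doesn't change what the model sees.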
@dbolya, I also have a question in this regard. I have around 1800 images with around 7061 annotations (each annotation is a separate instance, even when several instances share the same class). Most of my images were initially very large, so I resized all of them so that the larger dimension is 640 while keeping the h:w ratio. I've created a COCO annotation JSON file with masks, bboxes, categories, etc. for each image-instance pairing. I have edited the config with my classes:
```python
MINC_LABEL_MAP = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10,
                  10: 11, 11: 12, 12: 13, 13: 14, 14: 15, 15: 16, 16: 17, 17: 18,
                  18: 19, 19: 20, 20: 21, 21: 22, 22: 23}
MINC_CLASSES = ("brick", "carpet", "ceramic", "fabric", "foliage", "food",
                "glass", "hair", "leather", "metal", "mirror", "other",
                "painted", "paper", "plastic", "polishedstone", "skin", "sky",
                "stone", "tile", "wallpaper", "water", "wood")
```
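As an aside, a contiguous 0→1, 1→2, … label map like this can be generated from the class tuple instead of typed by hand (the helper name `make_label_map` is hypothetical, not a YOLACT function):

```python
def make_label_map(class_names):
    """Map 0-based source category ids to YOLACT's 1-based class ids
    (id 0 is reserved for background)."""
    return {i: i + 1 for i in range(len(class_names))}

# e.g. MINC_LABEL_MAP = make_label_map(MINC_CLASSES)
```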
and edited `yolact_base_config` to point to these classes:
```python
yolact_base_config = coco_base_config.copy({
    'name': 'yolact_base',

    # Dataset stuff
    'dataset': minc_dataset,
    'num_classes': len(minc_dataset.class_names) + 1,

    # Image Size
    'max_size': 550,
})
```
and
```python
minc_dataset = dataset_base.copy({
    'name': 'MINC',

    'train_images': './data/coco/images-minc',
    'train_info': './data/minc/instances_all.json',

    # note: the keys must be lowercase ('valid_images', not 'Valid_images')
    'valid_images': './data/coco/images-minc',
    'valid_info': './data/minc/instances_all.json',

    'has_gt': True,

    'label_map': MINC_LABEL_MAP,
    'class_names': MINC_CLASSES,
})
```
I've been training for about 1.5 days using `--config=yolact_base_config --batch_size=5`, resuming each time I have to stop it.
My question is two-fold...
```
Computing validation mAP (this may take a while)...
Warning: Augmentation output an example with no ground truth. Resampling...
(the warning above repeats many more times)
```
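Those "no ground truth" warnings typically come from images whose instances are small enough that the random-crop augmentation can drop all of them. A sketch for finding likely culprits in the annotation JSON (`flag_sparse_images` and the `min_area` threshold are my own, not YOLACT constants):

```python
def flag_sparse_images(coco, min_area=16.0):
    """Return ids of images whose annotations are all tiny or absent.

    coco is a loaded COCO instances dict. min_area is an arbitrary
    threshold in pixels^2; images below it are the ones most likely to
    lose every instance during crop augmentation and trigger the
    'Resampling...' warning.
    """
    areas = {}
    for ann in coco["annotations"]:
        areas.setdefault(ann["image_id"], []).append(ann.get("area", 0.0))
    return [im["id"] for im in coco["images"]
            if max(areas.get(im["id"], [0.0])) < min_area]
```

Usage would be something like `flag_sparse_images(json.load(open('./data/minc/instances_all.json')))`; a handful of flagged images is harmless, but many of them slows training down noticeably.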
```
Calculating mAP...

       |  all  |  .50  |  .55  |  .60  |  .65  |  .70  |  .75  |  .80  |  .85  |  .90  |  .95  |
-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
   box |  0.04 |  0.20 |  0.13 |  0.08 |  0.01 |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |
  mask |  0.01 |  0.03 |  0.02 |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |
-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
```
```
[103] 36980 || B: 5.843 | C: 4.773 | M: 90.437 | S: 0.340 | T: 101.392 || ETA: 26 days, 2:01:11 || timer: 0.794
[103] 36990 || B: 5.724 | C: 4.789 | M: 90.071 | S: 0.329 | T: 100.913 || ETA: 26 days, 1:32:27 || timer: 0.741
```
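As a quick sanity check, the epoch counter in the log is consistent with the dataset size stated above:

```python
# iteration 36980 at batch size 5 over ~1800 images
iters, batch_size, n_images = 36980, 5, 1800
epochs = iters * batch_size / n_images
print(round(epochs, 1))  # 102.7, matching the "[103]" epoch marker
```

So the run really has seen the data ~100 times; losses this high (mask loss around 90) after that many epochs usually point at a data or config problem rather than insufficient training.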
If this does not pertain to the above question or if it is too much, feel free to open a new issue or contact me about doing so.
Thank you!
Hi @dbolya ,
I am not changing any other configuration at all; the only change I made was to add a custom dataset and point the `dataset` field of `yolact_base_config` at it.
Hi @dbolya ,
I just used the basic configuration to train on my own dataset (which is in COCO format), and I only got around 20 mAP for both box and mask (30+ epochs). The speed is not good either (3 FPS for YOLACT++ with a ResNet-50 backbone), which is far from the speed of the same model with the weights you provide on GitHub.
I suspect the network has not converged, but I haven't seen any improvement for several epochs.
What do you suggest to improve the performance?
Thanks.
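When comparing FPS numbers, it helps to time inference the same way in both cases. A minimal, framework-agnostic sketch (`measure_fps` and `run_once` are hypothetical names; `run_once` stands for a single forward pass):

```python
import time

def measure_fps(run_once, warmup=5, iters=50):
    """Average frames per second of run_once(), excluding warm-up.

    With CUDA, run_once should call torch.cuda.synchronize() at the
    end so that the measured time includes the GPU work.
    """
    for _ in range(warmup):
        run_once()
    start = time.perf_counter()
    for _ in range(iters):
        run_once()
    return iters / (time.perf_counter() - start)
```

Timing the first few iterations (before CUDA kernels are compiled and caches are warm) is a common cause of surprisingly low FPS readings.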
Originally posted by @ShawnNew in https://github.com/dbolya/yolact/issues/184#issuecomment-601590442