mburges-cvl opened 1 year ago
Is MMDetection reading your dataset correctly? The fact that your own model reads it doesn't mean MMDet is also reading it. I would check the dataset loading with and without augmentations. To inspect the dataset without having to train a model, you can use either MMDet's browse_dataset.py tool (see https://github.com/open-mmlab/mmdetection/issues/10480#issuecomment-1593338077) or my script, which doesn't write out the augmented images but checks train, val and test at the same time (browse_dataset.py only checks train): https://github.com/open-mmlab/mmdetection/issues/10525#issuecomment-1594639495. LMK if this helped!
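For reference, a typical invocation looks like this (assuming the MMDetection 3.x tool layout; the config path is a placeholder and the flags may differ between versions):

```
python tools/analysis_tools/browse_dataset.py configs/my_config.py --output-dir work_dirs/browse --not-show
```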
Hello,
I have tried your code and it outputs the following:
   n_images  my_object dataset
0      2754      17943   train
1       804       4112     val
2       804       4112    test
which is correct (val and test are the same split here). Also, browse_dataset.py does output the correct images with bounding boxes (on the training set, with and without augmentations).
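For anyone who wants to reproduce this kind of count without going through MMDetection, here is a minimal sketch using pycocotools (the annotation paths are placeholders, not the actual ones from this issue):

```python
# Standalone sanity check of the raw COCO annotation files,
# independent of MMDetection's data loading pipeline.
from pycocotools.coco import COCO

for split in ('train', 'val', 'test'):
    coco = COCO(f'data/my_dataset/annotations/{split}.json')  # placeholder path
    print(split, 'images:', len(coco.imgs), 'annotations:', len(coco.anns))
```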
So I would argue that the dataset is loaded correctly. Do you have any other ideas why the training does not work?
Thank you for your help!
> which is correct (val and test are the same split here). Also, browse_dataset.py does output the correct images with bounding boxes (on the training set, with and without augmentations).
How did you get browse_dataset.py to output the images without augmentations? Did you comment out the relevant section(s) in the config file, or is there an argument one can pass that tells it to ignore augmentations?
I commented out the relevant sections.
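For reference, a minimal sketch of what that can look like in an MMDet 3.x-style config (these are common default transforms, not necessarily the exact ones from this issue):

```python
# Train pipeline with the augmentation transforms commented out,
# so browse_dataset.py renders the un-augmented samples.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    # dict(type='RandomFlip', prob=0.5),  # augmentation, disabled for browsing
    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
    dict(type='PackDetInputs'),
]
```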
> I commented out the relevant sections.
Got it, thanks.
> So I would argue that the dataset is loaded correctly. Do you have any other ideas why the training does not work?
I'm sorry, but I can't help you here. However, in this issue you can find a couple of RTMDet config files that work for custom datasets. Hopefully you can adapt them to yours.
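As a general pointer, the pieces that most often need changing when adapting a config to a custom dataset look roughly like this (a sketch assuming an MMDet 3.x Faster R-CNN config; all names and paths are placeholders):

```python
# Minimal custom-dataset overrides: the class metadata, annotation
# paths, and the head's num_classes must all agree with the dataset.
metainfo = dict(classes=('my_object',))  # placeholder class name

train_dataloader = dict(
    dataset=dict(
        metainfo=metainfo,
        data_root='data/my_dataset/',       # placeholder
        ann_file='annotations/train.json',  # placeholder
        data_prefix=dict(img='train/'),
    ))

model = dict(
    roi_head=dict(bbox_head=dict(num_classes=1)))  # must match len(classes)
```

A mismatched num_classes or missing metainfo is a common cause of all-zero COCO metrics on custom datasets.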
I'm facing a similar problem; it is probably an issue with the data loading pipeline. Has anyone made any progress?
Hello,
I am new to the MMDetection framework and I would like to train different models on my dataset to compare their performance with my own model. I used this tutorial:
https://mmdetection.readthedocs.io/en/latest/user_guides/train.html#train-with-customized-datasets
, but none of the selected models (Faster R-CNN, YOLOX, DINO, ...) learned anything; every time, the COCO metrics look like this:
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100  ] = 0.000
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.001
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100  ] = 0.013
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300  ] = 0.035
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.035
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.010
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.036
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.033
with essentially nothing learned. However, when I train my own model (not in MMDetection) it does work, and I get a mAP@50 of over 0.6. So I don't think the error is in the dataset. This is my config for my dataset:
And this is the config for my Faster R-CNN:
I have tried:
It seems like I am missing an obvious step, but I am currently out of ideas. Does anybody have any ideas?
Thanks!
Attachment: my_conda_env.txt