longzw1997 / Open-GroundingDino

This is a third-party implementation of the paper Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection.

Training -> Validation Scores constantly decreasing #95

Open laurenzheidrich opened 1 month ago

laurenzheidrich commented 1 month ago

First of all, thanks for the great repo!

To understand your repo and training code I have been running the provided example in Training_Script_example.ipynb on my local workstation with an A6000 GPU. Downloading the data, converting it into the required format, and starting the training all work fine.
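For reference, this is roughly how I prepare my annotations; a minimal sketch based on my understanding of the ODVG-style jsonl layout (one JSON object per image with a `detection.instances` list). The field names and the xyxy box convention are my assumptions, not copied from the repo's own conversion script:

```python
import json

# Sketch: convert a COCO-format annotation file into an ODVG-style jsonl file.
# Field names ("filename", "detection", "instances", ...) and the xyxy box
# convention are assumptions about the expected format, not the repo's script.
def coco_to_odvg(coco_json_path, out_jsonl_path):
    with open(coco_json_path) as f:
        coco = json.load(f)

    categories = {c["id"]: c["name"] for c in coco["categories"]}
    # contiguous label ids, as a label_map.json would typically expect
    cat_id_to_label = {cid: i for i, cid in enumerate(sorted(categories))}

    anns_per_image = {}
    for ann in coco["annotations"]:
        anns_per_image.setdefault(ann["image_id"], []).append(ann)

    with open(out_jsonl_path, "w") as out:
        for img in coco["images"]:
            instances = []
            for ann in anns_per_image.get(img["id"], []):
                x, y, w, h = ann["bbox"]           # COCO boxes are xywh
                instances.append({
                    "bbox": [x, y, x + w, y + h],  # assumed xyxy
                    "label": cat_id_to_label[ann["category_id"]],
                    "category": categories[ann["category_id"]],
                })
            record = {
                "filename": img["file_name"],
                "height": img["height"],
                "width": img["width"],
                "detection": {"instances": instances},
            }
            out.write(json.dumps(record) + "\n")
```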

However, when looking at the metrics during training, I notice that the validation scores decrease steadily, starting from the first epoch.

Here is the training output of Epoch 0:

[screenshot]

And the corresponding validation metrics:

[screenshot]

And here is the training output after Epoch 4:

[screenshot]

And the corresponding validation metrics:

[screenshot]

The metrics between epoch 0 and epoch 4 look basically like an interpolation between the two, i.e. the validation results get gradually worse. Normally this would mean that the network starts overfitting right after the first epoch, but that doesn't seem right, does it?
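To make the trend easier to see than in the screenshots, I plot the per-epoch validation AP from the log.txt written during training; a small sketch assuming a DETR/DINO-style log where each line is a JSON dict, and assuming the COCO eval results are stored under "test_coco_eval_bbox" with AP@[0.5:0.95] as the first entry:

```python
import json
import matplotlib.pyplot as plt

# Sketch: plot validation AP per epoch from a DETR/DINO-style log.txt.
# Assumptions: one JSON dict per line, with an "epoch" key and the COCO eval
# list under "test_coco_eval_bbox" (AP@[0.5:0.95] first).
epochs, ap = [], []
with open("logs/log.txt") as f:
    for line in f:
        stats = json.loads(line)
        if "test_coco_eval_bbox" in stats:
            epochs.append(stats["epoch"])
            ap.append(stats["test_coco_eval_bbox"][0])

plt.plot(epochs, ap, marker="o")
plt.xlabel("epoch")
plt.ylabel("val AP@[0.5:0.95]")
plt.title("Validation AP per epoch")
plt.show()
```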

I also tried this with a custom dataset of MRI medical images, and I observe something very similar there. The validation scores in the first epoch are quite low (AP around 1%, which makes sense since MRI images are not part of the pretraining data), but after a few epochs the AP metrics are all 0.000.
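One thing I could try next to rule out plain overfitting on the small MRI dataset is freezing most of the pretrained weights and only training the remaining (decoder/head) parameters. A rough sketch; the parameter-name prefixes ("backbone", "bert") are guesses and would need to be checked against `model.named_parameters()` for this repo:

```python
import torch.nn as nn

def freeze_pretrained(model: nn.Module) -> None:
    """Freeze the image backbone and text encoder, leaving the rest trainable.

    The name prefixes below are assumptions; inspect model.named_parameters()
    to find the actual prefixes used in this codebase.
    """
    frozen = trainable = 0
    for name, param in model.named_parameters():
        if name.startswith("backbone") or "bert" in name:
            param.requires_grad_(False)
            frozen += param.numel()
        else:
            trainable += param.numel()
    print(f"frozen: {frozen / 1e6:.1f}M params, trainable: {trainable / 1e6:.1f}M params")
```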

I am not really sure what is going on. Do you have any idea, or have you encountered this behaviour before? Thanks!