I have trained a model from pre-trained COCO weights with approximately 960 training images and 320 validation images, applying data augmentation with imgaug and a staged training schedule ('heads', '4+', 'all'). I ran the training both on a PC with a Quadro P4000 GPU and on Google Colaboratory with a Tesla P100 GPU. The results showed overfitting in the Google Colab environment.
The results with the P4000 are not much better, but I do get better training and validation loss curves. My question is: why are the results not better on a GPU with more graphics memory? I have tried changing the images-per-GPU and steps-per-epoch parameters and decreasing the learning rate, but I still get similar results: with the P100, the validation loss starts to increase in the last training epochs. Are there any parameters that Mask R-CNN adjusts automatically according to computational capacity?
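For reference, this is roughly the setup I am using, assuming Matterport's Mask R-CNN API; the epoch counts and learning rates below are illustrative placeholders, not my exact values:

```python
# Sketch of the training configuration, assuming Matterport's Mask R-CNN
# (github.com/matterport/Mask_RCNN). Stage lengths and learning rates here
# are illustrative, not my exact values.

TRAIN_IMAGES = 960
VAL_IMAGES = 320
IMAGES_PER_GPU = 2  # effective batch size = GPU_COUNT * IMAGES_PER_GPU

# In this API, an "epoch" is STEPS_PER_EPOCH batches, not a full dataset
# pass, so I set it to cover the training set once per epoch:
STEPS_PER_EPOCH = TRAIN_IMAGES // IMAGES_PER_GPU
VALIDATION_STEPS = VAL_IMAGES // IMAGES_PER_GPU

# Staged fine-tuning schedule: (layers argument, end epoch, learning rate)
STAGES = [
    ("heads", 20, 1e-3),  # only the randomly initialized head layers
    ("4+",    40, 1e-3),  # ResNet stage 4 and up
    ("all",   60, 1e-4),  # all layers, at a lower learning rate
]

# The actual training calls (not executed here) look like, per stage:
#   model.train(dataset_train, dataset_val,
#               learning_rate=lr, epochs=end_epoch,
#               layers=layers, augmentation=augmentation)

if __name__ == "__main__":
    print(STEPS_PER_EPOCH, VALIDATION_STEPS)
```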
[Loss graphs: Google Colab (Tesla P100) vs. Nvidia P4000]
I appreciate your responses.