I trained a ResNet50 on the GTSRB data set (http://benchmark.ini.rub.de/?section=gtsrb&subsection=news). I then "augmented" the same GTSRB training set with randomly generated images of the appropriate dimensions and repeated the training procedure. The "noise-augmented" training resulted in an implausible +20% accuracy improvement on the same validation set. This does not seem reasonable: I would expect the same or worse accuracy in the "augmented" case, since the added images carry no signal.
This behavior holds regardless of the number of epochs, and I have observed the same thing on other data sets, such as CIFAR-10.
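For reference, the augmentation step is roughly the following sketch (the image size of 32x32x3, the uniform pixel noise, and the random label assignment are my assumptions about the setup, not details from a specific implementation; GTSRB has 43 classes):

```python
import numpy as np

def make_noise_images(n, shape=(32, 32, 3), num_classes=43, seed=0):
    """Generate n random-noise images with random class labels.

    shape and the labeling scheme are assumptions for illustration:
    uniform uint8 pixel noise, labels drawn uniformly over the classes.
    """
    rng = np.random.default_rng(seed)
    images = rng.integers(0, 256, size=(n, *shape), dtype=np.uint8)
    labels = rng.integers(0, num_classes, size=n)
    return images, labels

noise_x, noise_y = make_noise_images(1000)
# These noise samples are then concatenated with the real training set
# before repeating the training procedure.
```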