Closed GlebSBrykin closed 3 years ago
I would like to discuss training the robust version of VGG19 on the full ImageNet. I have an NVIDIA GPU with 3 GB of memory and 8 GB of RAM. Is it actually possible to run training under these conditions? (Training time is unlimited; what matters is whether it is possible at all.)

Hi! In principle there should be nothing stopping you other than memory constraints. My advice would be to decrease the batch size as far as possible, which will increase training time but decrease memory usage; just make sure to decrease the learning rate accordingly as well. Also, if your GPU supports mixed-precision training, consider using the --mixed-precision flag.

Closing this issue now, feel free to open another if you have more questions!
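The "decrease the learning rate accordingly" advice can be sketched with the linear scaling heuristic (keep the lr-to-batch-size ratio constant). This is a common convention, not something the maintainer specified; the base values below (lr 0.1 at batch 256) are assumptions taken from typical ImageNet recipes:

```python
def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear scaling rule: keep the lr/batch-size ratio constant.

    Heuristic only -- small batches may also benefit from a warmup
    phase or a slightly less-than-linear reduction.
    """
    return base_lr * new_batch / base_batch

# Hypothetical reference recipe: lr = 0.1 at batch size 256.
# Shrinking the batch to 16 to fit a 3 GB GPU:
print(scaled_lr(0.1, 256, 16))  # -> 0.00625
```

With a batch size this small, each epoch over full ImageNet will be very slow, but as noted above, memory (not time) is the binding constraint.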