Closed cqray1990 closed 2 years ago
`samples_per_gpu=20, workers_per_gpu=2,`
In our experience, the batch size can reach 8 on a Tesla V100 (16 GB), so a batch size of 4 on a 2080 Ti (11 GB) seems reasonable.
Because I saw that your default is 20, I suspected something was wrong with my machine.
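For anyone hitting out-of-memory on a smaller GPU, a minimal sketch of lowering the per-GPU batch size in an mmdetection-style config (the surrounding `data = dict(...)` wrapper is an assumption for illustration; only `samples_per_gpu` and `workers_per_gpu` appear in this thread):

```python
# Hypothetical mmdetection-style config fragment.
# Only samples_per_gpu / workers_per_gpu come from this thread; the
# enclosing dict is assumed for illustration.
data = dict(
    samples_per_gpu=4,  # per-GPU batch size; ~4 fits an 11 GB 2080 Ti per the reply above
    workers_per_gpu=2,  # dataloader worker processes per GPU
)
```

Note that if you shrink the batch size relative to the default, you may also want to scale the learning rate down proportionally.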