It seems that the training process needs 600,000 iterations.
It also seems that the memory usage of each GPU is quite low during training (about 4.3 GB out of 24 GB on an RTX 3090).
Is there any way to increase the memory usage and thereby speed up training?
python ./main.py --config ./configs/rectified_flow/cifar10_rf_gaussian_ddpmpp.py --eval_folder eval --mode train --workdir ./logs/1_rectified_flow
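One possible direction is to raise the per-step batch size so each GPU does more work per iteration. Below is a minimal sketch, assuming the config is a score_sde-style ml_collections.ConfigDict with a training.batch_size field; the attribute names and the default value are assumptions, not verified against this repo.

```python
# Sketch: increase the per-GPU workload by raising the training batch size.
# Assumes a score_sde-style ml_collections config with config.training.batch_size;
# these names are assumptions and may differ in this repo.
from configs.rectified_flow.cifar10_rf_gaussian_ddpmpp import get_config

config = get_config()
print("default batch size:", config.training.batch_size)

# A larger batch raises memory usage per step, so each iteration processes more
# samples; the total iteration count (and possibly the learning rate) may then
# need to be adjusted accordingly.
config.training.batch_size = 512
```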