I tried to re-implement this demo in PyTorch 1.12 using a 2-byte floating-point format (half precision). I found that the training takes far too long even when only "points64_part1" is involved.
My experiment runs on two GTX 2080 Ti GPUs. Or perhaps I just need an SSD.
It also seems that, like your code does, I don't need to train the model on the complete dataset every epoch.
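In case it helps to compare, below is a minimal sketch of what I mean, assuming a generic map-style dataset and model rather than the actual ones from the demo: each epoch draws only a random subset of the data via `RandomSampler(num_samples=...)`, and the forward/backward pass runs in float16 with `torch.cuda.amp` (available in PyTorch 1.12).

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, RandomSampler, TensorDataset

# Hypothetical stand-ins for the real data and model from the demo.
dataset = TensorDataset(torch.randn(100_000, 64), torch.randint(0, 10, (100_000,)))
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # loss scaling for float16 training

# Draw only a random subset of the dataset each epoch instead of iterating
# over all of it; replacement=True lets num_samples be smaller than len(dataset).
sampler = RandomSampler(dataset, replacement=True, num_samples=10_000)
loader = DataLoader(dataset, batch_size=256, sampler=sampler,
                    num_workers=4, pin_memory=True)

for epoch in range(10):
    for x, y in loader:
        x, y = x.cuda(non_blocking=True), y.cuda(non_blocking=True)
        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast():   # forward pass in float16
            loss = criterion(model(x), y)
        scaler.scale(loss).backward()     # scale loss to avoid fp16 underflow
        scaler.step(optimizer)
        scaler.update()
```

Is per-epoch subsampling along these lines consistent with how your code handles the dataset?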