Closed · YhQIAO closed this issue 2 days ago
Due to the GPU memory limitation, we trained the model with batch_size=1 in our experiments. I will work on solving the issues caused by increasing the batch size.
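One common way to emulate a larger effective batch while keeping batch_size=1 in memory is gradient accumulation: sum per-sample gradients over k forward passes, then apply one averaged update. This is not CoFiI2P's code, just a minimal numpy sketch of the idea (the model, `grad_mse`, and learning rate are all hypothetical):

```python
import numpy as np

def grad_mse(w, x, y):
    # gradient of 0.5 * (w.x - y)^2 w.r.t. w for a single sample
    return (x @ w - y) * x

def accumulate_step(w, xs, ys, lr=0.1):
    # run batch_size=1 passes over k samples, summing gradients,
    # then apply a single averaged update -- numerically equivalent
    # to one step with batch_size=k
    g = np.zeros_like(w)
    for x, y in zip(xs, ys):
        g += grad_mse(w, x, y)
    return w - lr * g / len(xs)
```

The accumulated update matches a true full-batch gradient step, so training dynamics are unchanged while peak memory stays at the single-sample level.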
As we follow the KPConv implementation of GeoTransformer here, CoFiI2P only supports batch_size=1 for now. We will try our best to support batch_size > 1 shortly.
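For context on why batch_size > 1 is nontrivial here: KPConv-style pipelines (as in GeoTransformer) handle variable-sized point clouds by concatenating all points into one flat array and carrying per-cloud lengths/offsets, rather than padding to a dense batch tensor. A rough numpy sketch of that bookkeeping (hypothetical helpers, not the repo's actual collate code):

```python
import numpy as np

def collate_stack(clouds):
    # stack variable-length (N_i, 3) clouds into one (sum N_i, 3) array
    # plus a lengths vector, KPConv-style
    lengths = np.array([len(c) for c in clouds])
    points = np.concatenate(clouds, axis=0)
    return points, lengths

def split_stack(points, lengths):
    # recover the individual clouds from the stacked representation
    offsets = np.cumsum(lengths)[:-1]
    return np.split(points, offsets, axis=0)
```

Every downstream operator (neighbor search, pooling, cross-attention) then has to respect these offsets, which is why simply setting batch_size=2 in the dataloader can produce malformed index tensors or absurd memory estimates instead of a clean out-of-memory error.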
If I change the batch_size in eval_all.py to 2, I get the following error. Increasing the batch_size will increase memory usage, but 1200 GB is very abnormal. I found that the batch_size in the training stage is also 1 (I haven't run the training code yet). Does this problem also occur on your machine when the batch size is set greater than 1?