ramdrop opened 2 years ago
It is weird, because all experiments were run on a 2080 Ti or a 1080 Ti. You can find the training set I generated here: https://cloud.mines-paristech.fr/index.php/s/mXN2RuebKjVMhLz
Thanks for your generated dataset. I have not managed to solve this issue, but I found a workaround: split the raw directory list into several pieces and run the preprocessing once per piece until all splits are processed.
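For anyone who wants to try the same workaround, a minimal sketch of the splitting step is below. The chunking helper is plain Python; the directory names and the idea of invoking the preprocessing script once per chunk are illustrative assumptions, not the repo's actual API — the point is that each run is a separate process, so all GPU allocations are released between splits.

```python
def split_into_chunks(items, n_chunks):
    """Split a list into n_chunks roughly equal, order-preserving pieces."""
    k, r = divmod(len(items), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        end = start + k + (1 if i < r else 0)  # first r chunks get one extra item
        chunks.append(items[start:end])
        start = end
    return chunks

# Hypothetical raw directory list; substitute the real 3DMatch scene folders.
raw_dirs = [f"scene_{i:03d}" for i in range(10)]

for chunk in split_into_chunks(raw_dirs, 3):
    # In practice, launch the preprocessing script here (e.g. via subprocess)
    # with only this chunk of directories, so GPU memory is freed on exit.
    print(chunk)
```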
Sorry to bother you again, but my training results are extremely weird: almost zero feature_matching_recall on both the val and test datasets after 50 epochs. I suspect this could be caused by data preprocessing. So, besides the training set you provided, would you mind sharing your full 3DMatch dataset as follows?
Hello, I ran the command and got the following output:
command:
output:
I checked the GPU allocated memory recorded by wandb (I tried two different versions of pycuda, and both resulted in the same error shown above):
Is it normal that the GPU allocated memory keeps increasing during data preprocessing? I thought an A100 with 40 GB of memory would be sufficient for this job. If it isn't, do you know the minimum memory requirement for preprocessing the 3DMatch dataset?