What batch size are you using?
I'm using the default configs from ml3d/configs:
randlanet_s3dis.yml => batch_size = 2
randlanet_semantickitti.yml => batch_size = 1
kpconv_s3dis.yml => batch_size = 4
kpconv_semantickitti.yml => batch_size = 1
All four configs end up with the same error (out of GPU memory).
Coming back with my own findings. RandLaNet usually takes a bit more than 8 GB of memory and KPConv takes just above 4 GB for training with the default settings. Inference takes less, though. I will update this later with exact numbers and configs so that newcomers can quickly assess which hardware is needed to start experimenting.
@charithmu
Did you finally find a good setting for the config file?
I also have 4 GB of dedicated graphics memory. I was able to run training (RandLaNet with S3DIS and torch) with:
num_points: 10240  # instead of 40960 / 20480
batch_size: 2
For TensorFlow, I have had no luck.
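For reference, roughly what the working torch run looks like in code. This is only a sketch following the config-loading pattern from the Open3D-ML README; the dataset path is a placeholder, and the exact placement of num_points (under model) and batch_size (under pipeline) is an assumption that may differ between Open3D-ML versions.

```python
# Sketch: train RandLANet on S3DIS with reduced memory settings (torch backend).
import open3d.ml as _ml3d
import open3d.ml.torch as ml3d

cfg = _ml3d.utils.Config.load_from_file("ml3d/configs/randlanet_s3dis.yml")

# Reduce the memory footprint: fewer sampled points per sample, small batch.
cfg.model["num_points"] = 10240        # default is 40960
cfg.pipeline["batch_size"] = 2
cfg.dataset["dataset_path"] = "/path/to/S3DIS"  # hypothetical path

model = ml3d.models.RandLANet(**cfg.model)
dataset = ml3d.datasets.S3DIS(**cfg.dataset)
pipeline = ml3d.pipelines.SemanticSegmentation(model, dataset=dataset, **cfg.pipeline)
pipeline.run_train()
```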
Hello, I'm probably coming back to this issue very late. I actually had problems running training with 4 GB of memory and switched to a better computer for my project. However, prediction can be done on a 4 GB GPU without problems. RandLaNet and KPConv took around 1.4 GB and 1.7 GB respectively, as I recall. I used a smaller num_points of around 60K. All tests were done in TensorFlow. I am sorry if this info is not very helpful. We can close the issue if others agree to do so.
Closing due to inactivity.
I cannot run either of the two models (KPConv or RandLaNet) with S3DIS or SemanticKITTI because of a CUDA out-of-memory error. Can someone confirm whether ~4 GB of dedicated graphics memory is enough to hold the models?
Current graphics card: Quadro T2000 with Max-Q Design
Total memory: 4096 MB
Total dedicated memory: 3914 MB
I have tried many different combinations so far, such as run_pipeline.py, the example Jupyter notebooks, etc. None of the models can continue beyond epoch 0. I have also tried the CUDA allow-memory-growth trick. Memory usage still grows past 3600 MB and gives the above error.
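By the allow-memory-growth trick I mean the standard TensorFlow setting below (a sketch; it only makes TF allocate GPU memory on demand instead of reserving it all upfront, it does not shrink the model itself, which is why training still runs out of memory):

```python
# Enable on-demand GPU memory allocation in TensorFlow.
import tensorflow as tf

for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```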