drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"

Can the model run on a smaller GPU (6 GB)? #110

Closed RoxaneChen02 closed 1 month ago

RoxaneChen02 commented 1 month ago

Hi,

I was wondering whether this model can run on an NVIDIA RTX A1000 6 GB GPU. I tried to run the code on the S3DIS dataset, but I cannot get past the data preprocessing step (I keep getting CUDA out-of-memory errors). I tried changing some parameters (xy_tiling, pc_tiling, etc.) as recommended in the README, but in vain.

Thank you in advance !

drprojects commented 1 month ago

Hi @RoxaneChen02, as stated in the README, we only provide configs for running on an 11 GB GPU. 6 GB is really quite small; this is still a large-scale 3D deep learning project, after all!

Still, if you want to try tweaking your dataset configs to fit in 6 GB of GPU memory, it might be possible (at the expense of slower training/inference and perhaps somewhat lower performance), but I cannot guarantee it or provide support for it.

As you have noticed, I already provide an extensive list of pointers for reducing GPU memory usage, depending on where in the pipeline (preprocessing/training/inference) the memory error occurs: https://github.com/drprojects/superpoint_transformer?tab=readme-ov-file#cuda-out-of-memory-errors. How you adjust those to suit your needs will depend on your machine specs, your own dataset, and your downstream application (e.g. the size of the objects you are looking for, the minimum voxel resolution reasonable for finding them, etc.).
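To give a rough idea of what that tuning looks like in practice, here is a hypothetical sketch of Hydra overrides for the tiling and voxel knobs mentioned above. The script path, experiment name, and exact key paths are illustrative only and need to be checked against the actual dataset configs in this repo:

```bash
# Hypothetical sketch only -- verify the exact experiment name and config keys
# against the Hydra configs shipped with this repo before running.
# Increasing the tiling splits each cloud into more, smaller tiles, which lowers
# peak GPU memory during preprocessing at the cost of longer processing time.
# Coarsening the voxel size reduces the number of points per tile, trading some
# resolution (and possibly some performance) for memory.
python src/train.py \
  experiment=s3dis \
  datamodule.xy_tiling=5 \
  datamodule.voxel=0.05
```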

If you find a configuration that fits in 6 GB of memory, feel free to send us a PR to share your findings!

PS: If you ❤️ or use this project, don't forget to give it a ⭐; it means a lot to us!