qq456cvb / Point-Transformers

Point Transformers

Point clouds size #34

Open danlapko opened 1 year ago

danlapko commented 1 year ago

Hengshuang et al. mention in their paper that "we apply self-attention locally, which enables scalability to large scenes with millions of points". However, this implementation can hardly be trained with `num_point > 8k` (on an NVIDIA RTX 3090). Any suggestions on how to train/apply this implementation to large point clouds?

qq456cvb commented 1 year ago

You may borrow some ideas from PointNet++, which splits the whole scene into a set of chunks, e.g., 1m x 1m x 1m. Then, within each chunk, use PointTransformer to predict the label for each point. You can find more details in the ScanNet data loader provided by PointNet++ (https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py).
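The chunking idea above can be sketched roughly as follows. This is a minimal illustration, not code from this repo: `predict_fn` is a hypothetical stand-in for a trained PointTransformer forward pass, and the cell-bucketing is one simple way to form the 1m x 1m x 1m chunks.

```python
import numpy as np

def predict_by_chunks(points, predict_fn, chunk_size=1.0, min_points=1):
    """Split a point cloud into axis-aligned cubic chunks and run
    per-point prediction on each chunk independently.

    points:     (N, 3) array of xyz coordinates.
    predict_fn: callable mapping an (M, 3) chunk to (M,) integer labels
                (stand-in for a PointTransformer inference call).
    """
    labels = np.full(len(points), -1, dtype=np.int64)
    # Assign each point to a cubic cell, e.g. 1m x 1m x 1m.
    cells = np.floor(points / chunk_size).astype(np.int64)
    # Group point indices by cell id.
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    for cell_id in np.unique(inverse):
        idx = np.where(inverse == cell_id)[0]
        # Skip nearly empty chunks; in practice you would also
        # subsample or pad each chunk to the model's num_point.
        if len(idx) < min_points:
            continue
        labels[idx] = predict_fn(points[idx])
    return labels
```

In practice you would also subsample each chunk to the model's fixed input size (as the PointNet++ ScanNet loader does) and possibly overlap chunks to avoid boundary artifacts.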

danlapko commented 1 year ago

Got it, thank you!