3dpcc / DeepPCC


Question about coding dense point clouds #1

Closed · net-F closed this issue 11 months ago

net-F commented 11 months ago

Amazing work!

In your coding scheme for DeepPCC, KNN self-attention is used after the various downscale/upscale layers. How much memory does this cost when coding dense point clouds such as the test point clouds shown in your paper? I notice that, in the decoder, you still use KNN self-attention after the last upscale layer, which has already restored the complete geometry of the point cloud. Can the GPU withstand such large memory consumption?

3dpcc commented 11 months ago

Thank you for your interest in our work. I checked my code and found that in the decoding phase, KNN self-attention should be applied before the convolutional upsampling (Fig. 1 of the SUPPLEMENTARY MATERIAL shows upsampling first and then KNN in the attribute decoder, which does not match the code). In practical testing, for solid point clouds, the GPU memory consumption ranges from 12 GB to 22 GB.

net-F commented 11 months ago

Thanks for your reply!

But I'm a little confused. You said that in the code, in the decoding phase, KNN self-attention is applied first and then the upsampling convolution. However, Fig. 1 (b) of the SUPPLEMENTARY MATERIAL applies upsampling first and then KNN in the attribute decoder. Which one should be implemented first?

May I ask further about the KNN self-attention? For point clouds with almost one million points, if every point's K nearest neighbors are queried in parallel for efficiency, a matrix multiplication of N×3 by 3×N (N is the number of points) is needed to form the pairwise distances, which would consume hundreds of GB of memory. How do you handle this problem? If querying serially instead, wouldn't training be extremely slow?
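For a sense of scale, materializing the full N×N pairwise-distance matrix that such a parallel query implies would actually take even more than that estimate (assuming float32 storage):

```python
# Memory for a dense N x N pairwise-distance matrix at float32,
# with N = 1e6 points as in the scenario above.
N = 1_000_000
print(N * N * 4 / 1e12, "TB")  # 4.0 TB -- far beyond any single GPU
```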

3dpcc commented 11 months ago

The KNN self-attention should be applied before the upsampling convolution. Therefore, the correct sequence is to decode features via KNN self-attention first and then apply the upsampling convolution.
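A minimal sketch of one decoder stage in that order (the module names here are illustrative placeholders, not the classes in this repository):

```python
import torch.nn as nn

# Illustrative ordering only: KNN self-attention at the coarse scale,
# followed by convolutional upsampling. The attention and upsample
# modules are hypothetical stand-ins, not this repo's actual layers.
class DecoderStage(nn.Module):
    def __init__(self, attention: nn.Module, upsample: nn.Module):
        super().__init__()
        self.attention = attention  # KNN self-attention block
        self.upsample = upsample    # convolutional upsampling block

    def forward(self, feats, xyz):
        feats = self.attention(feats, xyz)  # attend before upsampling
        return self.upsample(feats, xyz)    # then restore the finer scale
```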

Regarding KNN self-attention, we leverage pytorch3d.ops.knn_points and pytorch3d.ops.knn_gather for efficient KNN operations, so the full N×N distance matrix is never materialized and the matrix multiplication incurs minimal GPU memory overhead. Alternatively, you can use other efficient matrix multiplication routines, which are also available in PyTorch.
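A minimal usage sketch, assuming PyTorch3D is installed (the tensor shapes and sizes below are illustrative, not taken from the repository):

```python
# Gathering K-nearest-neighbor features without ever building the
# full N x N distance matrix.
import torch
from pytorch3d.ops import knn_points, knn_gather

N, K, C = 1_000_000, 16, 32
xyz = torch.rand(1, N, 3, device="cuda")    # (B, N, 3) point coordinates
feats = torch.rand(1, N, C, device="cuda")  # (B, N, C) per-point features

# The CUDA kernel behind knn_points scans candidate points per query,
# so peak memory is O(N * K) rather than O(N^2).
knn = knn_points(xyz, xyz, K=K)              # knn.idx: (B, N, K)
neighbor_feats = knn_gather(feats, knn.idx)  # (B, N, K, C)
```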