dvirginz / DPC


Evaluating on a larger point cloud #1

Closed Sentient07 closed 2 years ago

Sentient07 commented 2 years ago

Hello,

Congrats on the great work! Can you please suggest how to run this method on a dataset with 5k / 8k / 200k vertices? For example, if I wanted to compare your method on the original FAUST or SHREC'19 datasets, how should I proceed?

Thanks!

itailang commented 2 years ago

Dear @Sentient07,

Thank you for your interest in our work!

First, note that in the supplementary material, we show that a model trained on 1k points can be applied to 4k points, with a small drop in performance.

Second, since the model learns a smooth embedding field across neighboring points, you can do the following steps:

1. Sample a smaller subset of points (e.g., 1k) from the high-resolution shape.
2. Extract features for the sampled points on the GPU.
3. Propagate features to the non-sampled points by KNN interpolation.
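A minimal NumPy sketch of the propagation step (KNN interpolation of features from sampled to non-sampled points). The helper name and the inverse-distance weighting are illustrative assumptions, not the repo's actual code, and the brute-force distance matrix is only suitable for a small example:

```python
import numpy as np

def propagate_features(sampled_pts, sampled_feats, all_pts, k=3):
    """Interpolate features from sampled points to all points via
    inverse-distance-weighted KNN (hypothetical helper, for illustration)."""
    # Pairwise squared distances (N, M) -- fine for a sketch, not for 200k points.
    d2 = ((all_pts[:, None, :] - sampled_pts[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]                     # k nearest sampled points
    d = np.sqrt(np.take_along_axis(d2, idx, axis=1))
    w = 1.0 / np.maximum(d, 1e-8)                           # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    return (sampled_feats[idx] * w[..., None]).sum(axis=1)  # (N, c)
```

For large N, the brute-force distance computation would be replaced by a KD-tree or chunked search.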

Please feel free to share further questions if any.

Sentient07 commented 2 years ago

Hello @itailang, thanks for your quick response.

Propagate features to the non-sampled points by KNN interpolation

Does this mean that the bottleneck is actually the number of points the feature extractor can handle at once, and not the construction of the S_ij matrix?

itailang commented 2 years ago

Dear @Sentient07,

Suppose that the number of points is n and the feature dimension is c. Then, the size of the feature matrix F is (n, c) and the size of the similarity matrix S is (n, n).

If you greatly increase the number of points, n will become much larger than c, and the resource limitation will arise from the similarity matrix.
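A quick back-of-the-envelope check of this. The feature dimension c = 128 below is an assumed example value (not taken from the paper), and float32 storage is assumed:

```python
# Memory of the (n, n) similarity matrix S versus the (n, c) feature matrix F,
# assuming float32 (4 bytes per entry). c = 128 is an illustrative assumption.
def mem_gb(rows, cols, bytes_per=4):
    return rows * cols * bytes_per / 1024**3

c = 128
for n in (1_000, 8_000, 200_000):
    print(f"n={n}: F ~ {mem_gb(n, c):.3f} GB, S ~ {mem_gb(n, n):.2f} GB")
```

At n = 200k, S alone needs on the order of 150 GB, far beyond any single GPU, while F stays under 0.1 GB.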

If your GPU has enough memory, you may run our method as is. If not, you can compute the latent nearest neighbor search outside the GPU. You may also use fewer points for feature extraction on the GPU and propagate the features outside the GPU, as I suggested. Note that these suggestions are for evaluation only, not for training.
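One way to run the latent nearest neighbor search outside the GPU without ever materializing the full (n, n) similarity matrix is to process it in row chunks on the CPU. A sketch (the function name and chunking scheme are assumptions, not the repo's implementation):

```python
import numpy as np

def latent_nn_chunked(feats_x, feats_y, chunk=1024):
    """For each row of feats_x (n_x, c), find its nearest neighbor in
    feats_y (n_y, c) in feature space, one row chunk at a time, so only
    a (chunk, n_y) distance block is ever held in memory."""
    nn = np.empty(len(feats_x), dtype=np.int64)
    for s in range(0, len(feats_x), chunk):
        block = feats_x[s:s + chunk]                                    # (b, c)
        d2 = ((block[:, None, :] - feats_y[None, :, :]) ** 2).sum(-1)   # (b, n_y)
        nn[s:s + chunk] = d2.argmin(axis=1)
    return nn
```

Peak memory is O(chunk * n_y) instead of O(n_x * n_y), at the cost of a Python-level loop.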