nmwsharp / diffusion-net

PyTorch implementation of DiffusionNet for fast and robust learning on 3D surfaces such as meshes or point clouds.
https://arxiv.org/abs/2012.00888
MIT License

DiffusionNet's diffusion manipulation. #20

Closed lidan233 closed 2 years ago

lidan233 commented 2 years ago

It was a great pleasure to read your paper. I have a few questions.

  1. Can your method be considered a kind of mean curvature flow? It seems very similar to mean curvature flow.
  2. Diffusion is, in effect, a smoothing operation. We notice that shape information is distorted by smoothing once features acquire global support: in the long-time limit the global shape information may be degraded (everything tends toward a sphere), which is why some researchers treat the HKS as a local descriptor (and not only because of the limited diffusion time). Do you think such distorted global information is still acceptable as a source of global information?
nmwsharp commented 2 years ago

Hi! Thanks for the questions.

  1. Yes, in the sense that if the input data are 'xyz' positions, the first diffusion layer transforms them in a way that looks just like mean curvature flow. Of course, in DiffusionNet we never actually update the geometry of the shape; we only process signals defined on the shape (see the first sketch after this list).

  2. I'm not sure I totally understand the question, but it is true that long-time, nearly-global diffusion will eventually drive signals to be extremely smooth, as on a sphere. One way we ensure that high-frequency information is preserved is by including the "residual" connections in the DiffusionNet block, which skip the diffusion layer (see the second sketch after this list). This means that even if all diffusion were totally global, high-frequency local signals would still be available to the network. In general, global signals can certainly be useful even if they are imperfect, but it is also useful to couple them with appropriate local data.
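To illustrate point 1, here is a minimal dense sketch of the connection. One implicit heat-diffusion step applied to the 'xyz' vertex positions is exactly one implicit-Euler step of mean curvature flow. The function name and the dense solve are illustrative only (not this repo's API); the actual DiffusionNet layer diffuses feature channels rather than moving vertices, and uses sparse or spectral solves.

```python
import torch

def implicit_diffusion_step(u0, L, M, t):
    """Solve (M + t L) u = M u0: one implicit heat-diffusion step.

    L: (V, V) cotangent Laplacian, M: (V,) lumped mass diagonal.
    If u0 holds the (V, 3) vertex positions, this is one implicit-Euler
    step of mean curvature flow; DiffusionNet applies the same operator
    to arbitrary (V, C) feature channels without updating the geometry.
    """
    A = torch.diag(M) + t * L           # dense for illustration only
    rhs = M.unsqueeze(-1) * u0
    return torch.linalg.solve(A, rhs)
```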
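And for point 2, here is a simplified sketch of how a skip past the diffusion layer preserves high-frequency information. This omits the spatial-gradient features of the real DiffusionNet block, and the class and argument names are illustrative assumptions, not the repo's API.

```python
import torch
import torch.nn as nn

class SpectralDiffusion(nn.Module):
    """Per-channel learned-time diffusion via the heat kernel
    exp(-t * lambda) in a truncated Laplacian eigenbasis (a sketch)."""
    def __init__(self, width):
        super().__init__()
        self.log_t = nn.Parameter(torch.zeros(width))  # learned time per channel

    def forward(self, x, mass, evals, evecs):
        # x: (V, C) features, mass: (V,), evals: (K,), evecs: (V, K)
        coeffs = evecs.T @ (mass.unsqueeze(-1) * x)    # (K, C) spectral coeffs
        decay = torch.exp(-evals.unsqueeze(-1) * self.log_t.exp())  # (K, C)
        return evecs @ (decay * coeffs)                # back to vertices

class BlockSketch(nn.Module):
    """Simplified DiffusionNet-style block: the undiffused input is
    concatenated with the diffused features, so high-frequency local
    information reaches the MLP even when diffusion is nearly global."""
    def __init__(self, width):
        super().__init__()
        self.diffusion = SpectralDiffusion(width)
        self.mlp = nn.Sequential(
            nn.Linear(2 * width, width), nn.ReLU(), nn.Linear(width, width))

    def forward(self, x, mass, evals, evecs):
        x_diff = self.diffusion(x, mass, evals, evecs)
        x_cat = torch.cat([x, x_diff], dim=-1)         # skip past diffusion
        return x + self.mlp(x_cat)                     # residual connection
```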