nmwsharp / diffusion-net

PyTorch implementation of DiffusionNet for fast and robust learning on 3D surfaces such as meshes or point clouds.
https://arxiv.org/abs/2012.00888
MIT License

Classification #6

Closed lucacarotenuto closed 2 years ago

lucacarotenuto commented 2 years ago

Hi,

As of now, only the segmentation example is online, so I'm wondering how to adapt the network for classification.

In the paper you write: "Various activations can be appended to the end of the network based on the problem at hand, such as a softmax for segmentation, or a global mean followed by a softmax for classification"

Does that mean I can keep the last_activation as lambda x : torch.nn.functional.log_softmax(x,dim=-1), and that I need to add a global-mean layer before it? What exactly is meant by the global mean? I assume you don't mean average pooling, since the paper states that no pooling was needed.

Thanks for your help!

nmwsharp commented 2 years ago

Hi!

You've basically got it right: for classification, don't apply any per-vertex activation at the end of the network; instead, take a global mean-pool over the vertices, then a softmax to get class scores.
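For concreteness, here's a minimal sketch of that head (the function and tensor names are illustrative, and it assumes the network's raw per-vertex output has shape `(V, n_classes)`):

```python
import torch

# Sketch of the classification head described above (names are illustrative).
# per_vertex_out: raw DiffusionNet output of shape (V, n_classes),
# with no per-vertex activation applied.
def classification_scores(per_vertex_out):
    # Global mean-pool: average over the vertex dimension,
    # collapsing (V, n_classes) into a single (n_classes,) vector.
    pooled = per_vertex_out.mean(dim=-2)
    # Softmax over classes (log-softmax here, for use with an NLL loss).
    return torch.nn.functional.log_softmax(pooled, dim=-1)
```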

I was actually working on getting our classification example up this week, so hopefully I'll have some code to point you towards soon! I may also add an additional option to the DiffusionNet constructor to make this automatic.

nmwsharp commented 2 years ago

FYI, there's now a classification example here: https://github.com/nmwsharp/diffusion-net/tree/master/experiments/classification_shrec11

I also added an option to the DiffusionNet constructor: you can pass outputs_at='global_mean' to produce a single output vector for classification, as discussed above.
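A quick usage sketch (the input/output sizes are illustrative; SHREC'11, used in the linked example, has 30 shape classes):

```python
import torch
import diffusion_net

n_classes = 30  # e.g. SHREC'11 in the linked example has 30 shape classes

# With outputs_at='global_mean', the network mean-pools its per-vertex
# features and returns one vector per shape; last_activation is applied
# after the pooling, so log-softmax yields per-class log-probabilities.
model = diffusion_net.layers.DiffusionNet(
    C_in=3,  # e.g. raw xyz coordinates as input features
    C_out=n_classes,
    outputs_at='global_mean',
    last_activation=lambda x: torch.nn.functional.log_softmax(x, dim=-1),
)
```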