rusty1s / pytorch_spline_conv

Implementation of the Spline-Based Convolution Operator of SplineCNN in PyTorch
https://arxiv.org/abs/1711.08920
MIT License

Spline filter #2

Closed emanhamed closed 6 years ago

emanhamed commented 6 years ago

Hi! Are the B-spline convolution kernels defined per input feature separately (one convolution filter for each input feature), or is a single convolution filter defined for all graph nodes and used to convolve all the graph features?

rusty1s commented 6 years ago

The B-spline convolution is defined for all graph nodes and convolves all graph features. We do this by parallelizing over the number of edges as well as the number of input features.
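To make the edge-parallel idea concrete, here is a minimal sketch (illustrative only, not the library's actual CUDA kernel): the convolution can be expressed by gathering source-node features for every edge in parallel, producing one message per edge, and scatter-adding the messages back to the target nodes. All names and shapes here are made up for the example.

```python
import torch

# Toy graph: 4 nodes, 5 directed edges, 3 input features per node.
num_nodes, in_channels = 4, 3
x = torch.randn(num_nodes, in_channels)       # node features
edge_index = torch.tensor([[0, 1, 2, 3, 0],   # source node of each edge
                           [1, 2, 3, 0, 2]])  # target node of each edge

# Edge-parallel step: one message per edge (in the real operator the
# message would also be weighted by the B-spline kernel evaluated at
# the edge's pseudo-coordinates).
msg = x[edge_index[0]]

# Aggregation step: scatter-add all messages at their target nodes.
out = torch.zeros(num_nodes, in_channels)
out.index_add_(0, edge_index[1], msg)
```

Each row of `out` is the sum of the features of the node's in-neighbors, computed without any explicit loop over nodes or edges.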

emanhamed commented 6 years ago

So the convolution kernel is initialized randomly and then applied to all nodes of the graph, where the weights are adjusted based on the B-spline basis functions?

rusty1s commented 6 years ago

I'm not sure if I understand you correctly. The trainable B-spline surface for a single input feature is parametrized by a constant number of trainable weights which are randomly initialized at the start of training. For a single output feature, we have m_in many B-spline kernels, where m_in denotes the number of input features. During training, the parameters get adjusted, so that kernels may detect, e.g., specific edges for learning on 3D meshes. Hope this helps!
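A hedged sketch of the parametrization described above: each of the `m_in` B-spline kernels per output feature is a small, constant-size set of trainable control weights, randomly initialized before training. The names, shapes, and the degree-1 interpolation below are illustrative, not the library's internal API.

```python
import torch

kernel_size = 5          # B-spline control values per dimension (assumed)
m_in, m_out = 3, 8       # number of input / output features (assumed)

# One trainable control weight per (knot, input feature, output feature),
# randomly initialized at the start of training.
weight = torch.nn.Parameter(torch.randn(kernel_size, m_in, m_out))

# For an edge with pseudo-coordinate u in [0, 1], a degree-1 B-spline
# evaluates the kernel as a linear interpolation between the two
# neighboring control weights on the knot grid:
u = torch.tensor(0.3)
pos = u * (kernel_size - 1)   # continuous position on the knot grid
lo = int(pos.floor())
frac = pos - lo
k = (1 - frac) * weight[lo] + frac * weight[lo + 1]   # shape (m_in, m_out)
```

Only the control weights are trained; the basis functions themselves are fixed, which is why the kernel has a constant number of parameters regardless of graph size.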

emanhamed commented 6 years ago

Thank you so much. The answer is clear.

One more question please, can the pre-trained model of SplineCNN on FAUST dataset be tested on other data (human bodies also) for correspondence?

rusty1s commented 6 years ago

This is an interesting question! I see no technical reason why the model could not be applied to other datasets. However, validating or interpreting the results on different datasets might be an issue, as you evaluate against the FAUST correspondences.

emanhamed commented 6 years ago

Exactly! I have tried it on other data and the model worked, but the results are not that good. That is why I'm asking whether there are any constraints on the meshing of the 3D object when using the pre-trained model on other data. My understanding is that the meshing matters when extracting the neighbourhood vertices for each patch, which differs between FAUST and my data. What do you think?

rusty1s commented 6 years ago

A few thoughts:

emanhamed commented 6 years ago

This means that the meshes don't have to have exactly the same meshing; a regular meshing should be enough. And, by default, the same number of vertices as the FAUST dataset. Right?

rusty1s commented 6 years ago

Yes, regular meshing should be enough (without guarantees :P). You do not even need the same number of vertices as FAUST; the model shouldn't have a problem with more or fewer nodes. All it does is search for the best corresponding FAUST node for every node in your mesh.
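The correspondence step described above can be sketched in a few lines (assuming, as in the SplineCNN FAUST experiment, that the network emits one score per reference vertex for every query node; the tensor names and the random stand-in logits are illustrative):

```python
import torch

num_query_nodes = 10      # your mesh can have any number of vertices
num_ref_nodes = 6890      # vertices per FAUST registration mesh

# Stand-in for the model output: a score over all FAUST vertices
# for every node of the query mesh.
logits = torch.randn(num_query_nodes, num_ref_nodes)

# Predicted correspondent: the highest-scoring FAUST vertex per query node.
correspondence = logits.argmax(dim=1)
```

Since the output dimension is fixed by the reference mesh, not the input, the query mesh is free to have more or fewer vertices than FAUST.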

emanhamed commented 6 years ago

Thank you so much, Matthias. Your answers were really helpful :)

rusty1s commented 6 years ago

You're welcome. Closing this for now. Feel free to write back :)