Hi Danel,
The entire network is trained end-to-end, so the geometric and topological encoders are all definitely supposed to work as one. In the piece of code you pointed out, the `hidden_crv_feat` and `hidden_srf_feat` tensors are computed as outputs of the convolutional layers `self.curv_encoder` and `self.surf_encoder`, and are then passed as the input node and edge embeddings (2nd and 3rd arguments) to the graph encoder `self.graph_encoder`. So they are linked together in the computation graph, which enables end-to-end backpropagation. The idea is to learn both geometric and topological features in a way that benefits the downstream task the most (classification in this case).
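To make the wiring concrete, here is a minimal self-contained sketch of that pattern. It is not the repository code: the layer sizes, grid channel counts, and the dense-adjacency message-passing step are illustrative stand-ins, but the gradient flow works the same way.

```python
# Minimal sketch, NOT the repository code: layer sizes, channel counts and the
# dense message-passing step are illustrative stand-ins for the real encoders.
import torch
import torch.nn as nn

class TinyUVNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Geometric level: CNNs over the 1D curve grids and 2D surface grids.
        self.curv_encoder = nn.Sequential(
            nn.Conv1d(6, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.surf_encoder = nn.Sequential(
            nn.Conv2d(7, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        # Topological level: one message-passing step over the face-adjacency
        # graph, followed by a graph-level classifier head.
        self.node_update = nn.Linear(2 * 64, 64)
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, crv_grids, srf_grids, adj):
        # crv_grids: (E, 6, U) edge U-grids; srf_grids: (F, 7, U, V) face
        # UV-grids; adj: (F, F) face-adjacency matrix.
        hidden_crv_feat = self.curv_encoder(crv_grids).squeeze(-1)   # (E, 64)
        hidden_srf_feat = self.surf_encoder(srf_grids).flatten(1)    # (F, 64)
        # Message passing: every face aggregates its neighbours' features.
        # (The real model also folds the curve features into the messages;
        # here they only enter the readout, to keep the sketch short.)
        msgs = adj @ hidden_srf_feat                                 # (F, 64)
        nodes = torch.relu(
            self.node_update(torch.cat([hidden_srf_feat, msgs], dim=1)))
        # Readout: pool node and edge features into one graph embedding.
        graph_emb = torch.cat([nodes.mean(0), hidden_crv_feat.mean(0)])
        return self.classifier(graph_emb)

model = TinyUVNet()
logits = model(torch.randn(12, 6, 10),              # 12 edges
               torch.randn(8, 7, 10, 10),           # 8 faces
               torch.randint(0, 2, (8, 8)).float())
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([3]))
loss.backward()
# The graph-level loss reaches the geometric (convolutional) weights:
print(model.curv_encoder[0].weight.grad.abs().sum() > 0)   # tensor(True)
print(model.surf_encoder[0].weight.grad.abs().sum() > 0)   # tensor(True)
```

Because `hidden_crv_feat` and `hidden_srf_feat` are ordinary tensors inside the autograd graph, the single `loss.backward()` call propagates gradients through the graph layers into the CNN weights with no extra machinery. Let me know if you need any further clarifications.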
Closing this issue. Feel free to reopen if you have further questions.
Hi all,
I have a question regarding the learning process. During training, UV-Net not only fits the graph weights (the topological level) through message passing of the 64-D embeddings on nodes and edges, but also trains those embeddings themselves through regular 2D and 1D convolutions over the original UV- and U-grids (the geometric level).
My question is: how does a UV-Net model compute the gradients at the geometric level during backpropagation, when classification happens at the topological (graph) level?
Or, put differently, once the weights on graph nodes and edges have been optimized, how does backpropagation continue all the way back through the face/edge embedding process?
I believe there is a disconnect between these two levels, but in the source code they seem to work as one thing.
I am currently trying to implement some explainability methods for the model, and this missing link has me stumped.
From the original code:

```python
class UVNetClassifier(nn.Module):
    ...
```
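For context, the kind of explainability probe I am trying to implement is essentially input-gradient saliency on the UV-grids. A toy stand-in of what I mean (hypothetical layers and shapes, not the actual `UVNetClassifier`):

```python
import torch
import torch.nn as nn

# Toy stand-in modules (hypothetical sizes), NOT the actual UVNetClassifier.
surf_encoder = nn.Sequential(
    nn.Conv2d(7, 64, kernel_size=3, padding=1),   # geometric level
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten())
classifier = nn.Linear(64, 10)                    # graph-level head stand-in

# 4 faces, 7 channels, 10x10 UV samples; track gradients w.r.t. the inputs.
srf_grids = torch.randn(4, 7, 10, 10, requires_grad=True)
score = classifier(surf_encoder(srf_grids).mean(dim=0))[3]  # class-3 score
score.backward()
# Per-face saliency over the UV parameter domain: (4, 10, 10).
saliency = srf_grids.grad.abs().sum(dim=1)
print(saliency.shape)   # torch.Size([4, 10, 10])
```

In the real model I cannot see how such gradients would make it back past the graph-level weights, hence my question.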
Thank you very much in advance, and apologies if this is not the right place to post this.