**How to visualize filters learned by `GMMConv`?**

I have the following `GMMConv` layers:

```python
self.conv1 = GMMConv(1, 16, dim=2, kernel_size=5, separate_gaussians=True)
self.conv2 = GMMConv(16, 1, dim=2, kernel_size=5, separate_gaussians=True)
```

Some questions:

1. With `kernel_size=5`, these layers learn 5×5 filters, right?
2. With `separate_gaussians=False`, does `GMMConv` act like an isotropic GNN?
Hi and thanks for your interest! `GMMConv` is a bit different from classical CNNs, since it learns `kernel_size` Gaussians which define the kernel points (in fact, it is similar to attention based on edge features). Those are accessible via `self.mu` and `self.sigma`, and you can plot them similarly to Figure 1 in the MoNet paper. With `separate_gaussians=False`, the Gaussians are shared across the features, so the layer will only learn 5 filters (which corresponds to the original MoNet formulation).
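For illustration, here is a minimal plotting sketch along those lines. It assumes a layer with `separate_gaussians=False`, where `self.mu` and `self.sigma` are assumed to have shape `[kernel_size, dim]`; the grid range and contour level are arbitrary choices made for this example:

```python
import matplotlib.pyplot as plt
import numpy as np
from torch_geometric.nn import GMMConv

# In practice you would load a trained model; a freshly constructed
# layer only has randomly initialized kernel points.
conv = GMMConv(1, 16, dim=2, kernel_size=5)

mu = conv.mu.detach().cpu().numpy()        # assumed shape: [kernel_size, dim]
sigma = conv.sigma.detach().cpu().numpy()  # assumed shape: [kernel_size, dim]

# Evaluate each (diagonal-covariance) Gaussian on a 2D grid of
# pseudo-coordinates and overlay one contour per kernel point,
# similar in spirit to Figure 1 of the MoNet paper.
xs, ys = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
for k in range(mu.shape[0]):
    sx = np.abs(sigma[k, 0]) + 1e-8  # guard against near-zero scales
    sy = np.abs(sigma[k, 1]) + 1e-8
    z = np.exp(-0.5 * (((xs - mu[k, 0]) / sx) ** 2
                       + ((ys - mu[k, 1]) / sy) ** 2))
    plt.contour(xs, ys, z, levels=[0.5])
    plt.scatter(mu[k, 0], mu[k, 1], marker='x')

plt.xlabel('pseudo-coordinate 0')
plt.ylabel('pseudo-coordinate 1')
plt.title('Learned GMMConv kernel points')
plt.show()
```

With `separate_gaussians=True`, you would presumably get one such set of Gaussians per input/output feature pair, so the same plot would be repeated over those extra dimensions as well.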