I'm wondering if you have an optimized implementation of a convolution with a full spectral-domain filter (instead of the Chebyshev expansion)?

x ∗_G g = U.T (diag(w_g) U x)

(see page 3 of https://arxiv.org/pdf/1506.05163.pdf), such that:

x : input signal
w_g = (w_1, ..., w_N) : spectral multipliers
U : eigenvectors
U.T : transposed eigenvectors

Here is what I've tried:
import torch
import torch.nn as nn

def graph_convolution(x, U, lam):
    '''
    Graph convolution layer with a full spectral-domain filter, for graph classification.
    x   : shape (n, z), where n is the number of nodes and z the number of features per node
    U   : eigenvectors, shape (n, n)
    lam : diagonal matrix of eigenvalues, shape (n, n)
    '''
    # 'lambda' is a reserved keyword in Python, hence 'lam'
    x1 = lam @ x   # matrix product, not element-wise '*'
    x2 = U @ x1    # e.g. with n=32 and z=3, x2 has shape (32, 3)
    # linear layer mapping the node dimension from 32 to 100
    cl1 = nn.Linear(32, 100)  # nn.Linear(in_features, out_features); the third argument is 'bias', not a size
    x = cl1(x2.T).T           # Linear acts on the last dimension, so transpose; output shape (100, 3)
    return x
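For comparison, here is a minimal sketch of the spectral filtering formula itself, as I read it from the equation above: the eigenvalues are not used directly; instead w_g is a learnable vector of one multiplier per eigenvector. The `SpectralGraphConv` class and its parameterization are my own assumptions, not code from the paper or the notebook:

```python
import torch
import torch.nn as nn

class SpectralGraphConv(nn.Module):
    """Full spectral-domain graph filter: y = U.T @ diag(w_g) @ U @ x."""
    def __init__(self, n_nodes):
        super().__init__()
        # one learnable spectral multiplier per eigenvector (assumed parameterization)
        self.w_g = nn.Parameter(torch.randn(n_nodes))

    def forward(self, x, U):
        # x: (n, z) node features; U: (n, n) Laplacian eigenvectors
        x_hat = U @ x                              # forward graph Fourier transform
        filtered = self.w_g.unsqueeze(1) * x_hat   # scale each spectral component
        return U.T @ filtered                      # transform back to the vertex domain

# usage with the shapes from the attempt above (n=32 nodes, z=3 features)
n, z = 32, 3
conv = SpectralGraphConv(n)
x = torch.randn(n, z)
U = torch.linalg.eigh(torch.eye(n))[1]  # eigenvectors of a symmetric matrix, for shape checking
y = conv(x, U)                          # y has shape (32, 3)
```

Note the output keeps shape (n, z); a separate linear layer would still be needed to change the node dimension.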
Hello @mdeff,
Thank you for this notebook showing different graph convolution implementations:
https://github.com/mdeff/cnn_graph/blob/master/trials/1_learning_filters.ipynb
Please correct me. Thank you,