OliverTryding / -RE-POLYGCL

A reproduction of the paper "POLYGCL: GRAPH CONTRASTIVE LEARNING VIA LEARNABLE SPECTRAL POLYNOMIAL FILTERS".

Questions #10

Open lin-uice opened 1 month ago

lin-uice commented 1 month ago

$\gamma_i^H=\sum_{j=0}^i\gamma_j,\quad\gamma_i^L=\gamma_0-\sum_{j=1}^i\gamma_j,\quad i=1,\ldots,K$ — why does the former represent high-pass? I have plotted the default setting of $f_\theta\left(\sum_{k=0}^K w_k^L T_k(\tilde{X})\right)$ and $f_\theta\left(\sum_{k=0}^K w_k^H T_k(\tilde{X})\right)$ [attached image] and got the following ws_H and ws_L:

tensor([ 2.0000e+00,  8.8857e-01, -2.1674e-08,  9.5838e-02, -1.0837e-07,
         3.2042e-02, -4.3349e-07,  1.3890e-02, -1.5172e-07,  5.5638e-03,
        -9.7535e-08])
tensor([ 2.0000e+00, -8.8857e-01, -4.0640e-08, -9.5838e-02, -8.5004e-08,
        -3.2042e-02, -5.4186e-07, -1.3889e-02, -2.2216e-07, -5.5641e-03,
         1.5308e-07])

The $w^H$ coefficients dotted with the $T_k(\hat{L})$ give the high-pass filter. So why? I am very hopeful for your reply.

lin-uice commented 1 month ago

The details: polygcl questions english.pdf

MicheleCattaneo commented 1 month ago

I am not sure if I understand your question. I assume your confusion is about why the blue line represents a high-pass filter while the orange one represents a low-pass filter. A spectral GNN has this shape:

$$U g_{\theta}(\Lambda)U^T x$$

Where $U$ comes from the diagonalization $L=U\Lambda U^T$ of the graph Laplacian $L$. What you want to learn is the filter $g_{\theta}$ that provides a certain response for the eigenvalues of the Laplacian. If the response is high for large input values, you have a high-pass filter, while if you have a high response for small values, you have a low-pass filter. This is because, in the spectral domain, small eigenvalues are associated with low frequencies of the graph, while large eigenvalues are associated with high frequencies. As an example, see Figure 1 in this paper
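
As a small illustration (a sketch I wrote for this thread, not code from the repo, and all names are mine), here is $U g_{\theta}(\Lambda)U^T x$ on a toy path graph, once with a low-pass response $g(\lambda)=2-\lambda$ and once with a high-pass response $g(\lambda)=\lambda$:

import numpy as np

# 4-node path graph and its symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
d = A.sum(axis=1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d))
lam, U = np.linalg.eigh(L)               # eigenvalues of L lie in [0, 2]

x = np.array([1., -1., 1., -1.])         # an oscillating ("high-frequency") signal

low  = U @ np.diag(2.0 - lam) @ U.T @ x  # g(lambda) = 2 - lambda: low-pass
high = U @ np.diag(lam) @ U.T @ x        # g(lambda) = lambda: high-pass

Components of $x$ along eigenvectors with large $\lambda$ are amplified by $g(\lambda)=\lambda$ and suppressed by $g(\lambda)=2-\lambda$, and vice versa for small $\lambda$; that is all "high-pass" vs. "low-pass" means here.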

This paper is also very good for getting to know these topics a bit more.

Then, PolyGCL is based on ChebNet, which approximates the first formula with a polynomial expansion, but the idea is the same.
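
And the polynomial version, for completeness: ChebNet never eigendecomposes; it evaluates $\sum_k w_k T_k(\hat{L})x$ with the three-term recurrence (again only a sketch with my own names; $\hat{L}=L-I$ shifts the spectrum of the normalized Laplacian into roughly $[-1,1]$):

import numpy as np

def cheb_filter(L_hat, x, ws):
    # T_0(L_hat) x = x,  T_1(L_hat) x = L_hat x,
    # T_{k+1}(L_hat) x = 2 L_hat T_k(L_hat) x - T_{k-1}(L_hat) x
    t_prev, t_cur = x, L_hat @ x
    out = ws[0] * t_prev + ws[1] * t_cur  # assumes len(ws) >= 2
    for w in ws[2:]:
        t_prev, t_cur = t_cur, 2 * (L_hat @ t_cur) - t_prev
        out = out + w * t_cur
    return out

# e.g. with the toy graph above: y = cheb_filter(L - np.eye(4), x, [0.5, 0.3, 0.2])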

I hope I understood what your doubt was, otherwise, please try to explain again :)

lin-uice commented 1 month ago

Thanks for your reply! I am very sorry for my poor English. My doubt is below:

Question

Reparameterization:

$$\gamma_i^H=\sum_{j=0}^i\gamma_j,\quad\gamma_i^L=\gamma_0-\sum_{j=1}^i\gamma_j,\quad i=1,\ldots,K$$

$$\mathbf{Z}_L=f_\theta\left(\sum_{k=0}^K w_k^L T_k(\hat{\mathbf{L}})\mathbf{X}\right),\quad\mathbf{Z}_H=f_\theta\left(\sum_{k=0}^K w_k^H T_k(\hat{\mathbf{L}})\mathbf{X}\right)$$

Why does the first represent low-pass and the second high-pass? More specifically, let $Z(\lambda)=\sum_{k=0}^K w_k T_k(\lambda)$. The paper says that $\gamma_i^H\leq\gamma_{i+1}^H$ guarantees a high pass, but the coefficient of each $\lambda$ term might mean high pass or low pass (sorry for my poor presentation). Is the derivative of $Z$ greater than zero (like your point)? [attached image: Pasted image 20240818094018]
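
(Some context that may help here; this is my reading of the ChebNetII-style construction that the appendix code implements, not a quote from the PolyGCL paper. With Chebyshev nodes and coefficients

$$x_j=\cos\left(\frac{j+1/2}{K+1}\pi\right),\qquad w_k=\frac{2}{K+1}\sum_{j=0}^{K}\gamma_j T_k(x_j),$$

the discrete orthogonality of the $T_k$ at these nodes gives the interpolation identity $Z(x_j)=\frac{w_0}{2}+\sum_{k=1}^K w_k T_k(x_j)=\gamma_j$, i.e. each $\gamma_j$ is exactly the value of the learned filter at node $x_j$.)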

So I did these three experiments:

exp 1: let $\gamma_0^H=0$, $\gamma_0^L=2$, $\gamma_i=\frac{2}{K}$, $K=10$. Got $w^H$ and $w^L$:

# w^H
tensor([ 2.0000e+00,  8.8857e-01, -2.1674e-08,  9.5838e-02, -1.0837e-07,
         3.2042e-02, -4.3349e-07,  1.3890e-02, -1.5172e-07,  5.5638e-03,
        -9.7535e-08])
# w^L
tensor([ 2.0000e+00, -8.8857e-01, -4.0640e-08, -9.5838e-02, -8.5004e-08,
        -3.2042e-02, -5.4186e-07, -1.3889e-02, -2.2216e-07, -5.5641e-03,
         1.5308e-07])

We can see that the $w$ behaves like a periodic function (because $T_k(\lambda)$ is a periodic function), and when $k$ is odd, $w_k^H$ is higher than its right neighbour $w_{k+1}^H$ (the even one) and is positive.

exp 2: for $Z$, let $\lambda=1$; we get $Z_H(1)=$ tensor(2.0359) and $Z_L(1)=$ tensor(-0.0359).

exp 3: with $\lambda\in(-1,1)$ and the setting following exp 1, plot $Z_H(\lambda)$ and $Z_L(\lambda)$: [attached image: Pasted image 20240817100835]

Changing the initial $\gamma_0^L$, $\gamma_0^H$ and the $\gamma_i$ does not change which filter is high-pass and which is low-pass. It indicates that $Z_H$ and $Z_L$ always represent high and low. So why?
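
A quick numerical check of that interpolation idea, in the same spirit as the appendix code below (a standalone sketch; the variable names are mine and the node ordering copies get_poly):

import torch

K = 10
angles = torch.flip((torch.arange(K + 1, dtype=torch.float64) + 0.5)
                    / (K + 1) * torch.pi, dims=[0])
x_j = torch.cos(angles)                    # Chebyshev nodes, ascending in (-1, 1)
# exp 1 setting: gamma_0^H = 0 and gamma_i = 2/K, so gamma_i^H = 2i/K (non-decreasing)
gam_H = torch.cat([torch.zeros(1, dtype=torch.float64),
                   torch.cumsum(torch.full((K,), 2.0 / K, dtype=torch.float64), dim=0)])
T = torch.cos(torch.outer(torch.arange(K + 1, dtype=torch.float64), angles))  # T[k, j] = T_k(x_j)
ws_H = 2.0 / (K + 1) * (T @ gam_H)         # w_k = 2/(K+1) * sum_j gamma_j T_k(x_j)
Z_at_nodes = ws_H[0] / 2 + T[1:].T @ ws_H[1:]
print(torch.allclose(Z_at_nodes, gam_H))   # True: Z_H(x_j) = gamma_j^H at every node

If that holds, a non-decreasing $\gamma^H$ pins $Z_H$ to rise across the nodes from $\gamma_0^H$ at $x\approx-1$ (i.e. $\lambda\approx 0$) to $\gamma_K^H$ at $x\approx 1$ ($\lambda\approx 2$); the polynomial can still wiggle between nodes, but its overall shape is the high-pass one in your plot.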

Appendix:

utils:

import torch
import torch.nn.functional as F

def get_cheb_i(x, i):
    # Chebyshev polynomial T_i(x) via the recurrence T_k = 2 x T_{k-1} - T_{k-2}
    if i == 0:
        return 1
    elif i == 1:
        return x
    else:
        T0 = 1
        T1 = x
        for ii in range(2, i + 1):
            T2 = 2 * x * T1 - T0
            T0, T1 = T1, T2
        return T2
def prefix_sum(gammas,gamma_0):
    # gamma_i^H = sum_{j=0}^{i} gamma_j
    gammas_H=torch.cat([gamma_0,gammas],dim=0)
    # cumulative sum: this is really handy
    return torch.cumsum(gammas_H,dim=0)
def prefix_diff(gammas,gamma_0):
    # gamma_i^L = gamma_0 - sum_{j=1}^{i} gamma_j
    gammas_L=torch.cat([gamma_0,-gammas],dim=0)
    return torch.cumsum(gammas_L,dim=0)
def get_ws(prefixed_gammas,x_j,K):
    # w_k = 2/(K+1) * sum_j gamma_j * T_k(x_j)
    ws=prefixed_gammas.clone()
    ws.fill_(0)
    for k in range(0,K+1):
        for j in range(0,K+1):
            ws[k]=ws[k]+prefixed_gammas[j]*get_cheb_i(x_j[j],k)
        ws[k]=2*ws[k]/(K+1)
    return ws
def get_poly(x,K,gamma_H,gamma_L):
    # Chebyshev node angles, reversed so the nodes x_j = cos(angle) ascend
    x_j = torch.Tensor([(j + 0.5) / (K + 1) * torch.pi for j in range(0, K + 1)][::-1])
    print(x_j)
    x_j = torch.cos(x_j)
    print(x_j)
    gammas=torch.Tensor(K)
    gammas.fill_(2/K)  # default: gamma_i = 2/K (so gamma_K^H = 2)
    gamma_H=torch.Tensor([gamma_H])
    gamma_L=torch.Tensor([gamma_L])
    prefix_gammas_H=prefix_sum(F.relu(gammas),gamma_H)
    prefix_gammas_L=prefix_diff(F.relu(gammas),gamma_L)
    prefix_gammas_L=F.relu(prefix_gammas_L)
    ws_H=get_ws(prefix_gammas_H,x_j,K)
    ws_L=get_ws(prefix_gammas_L,x_j,K)
    # exp 2: export w_H and w_L
    print(ws_H)
    print(ws_L)
    out_H=ws_H[0]/2  # w_0 is halved in the Chebyshev series
    out_L=ws_L[0]/2
    for k in range(1,K+1):
        out_H=out_H+ws_H[k]*get_cheb_i(x,k)
        out_L=out_L+ws_L[k]*get_cheb_i(x,k)
    return out_H,out_L

exp 1, exp 2

get_poly(1,10,0,2)
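
Running this prints the node angles and positions and the ws_H / ws_L tensors shown above; the two returned values are $Z_H(1)=$ tensor(2.0359) and $Z_L(1)=$ tensor(-0.0359), i.e. exp 2.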

exp 3

import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-1, 1, 100)
y_H, y_L = get_poly(x, K=10, gamma_H=0, gamma_L=2)
plt.plot(x, y_H, label='Z_H')
plt.plot(x, y_L, label='Z_L')
plt.legend()
plt.show()