The main iteration step should be:

```python
u = D_inv * b + D_inv * W * u + lambda * (u - (D * u).sum(0) / D.sum())
```
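For reference, here is a minimal NumPy sketch of this corrected iteration, assuming dense arrays: `W` is the $n \times n$ weight matrix, `b` the $n \times k$ source term, and `lam` stands in for `lambda` (a reserved word in Python). The function name and iteration count are mine, not from the released code.

```python
import numpy as np

def poisson_iteration(W, b, lam, num_iter=500):
    """One possible reading of the corrected update (a sketch, not the
    authors' code): W is a dense n x n array, b a dense n x k array."""
    d = W.sum(axis=1)   # node degrees; dividing by d plays the role of D_inv
    D = d[:, None]      # degree column for row-wise broadcasting
    u = np.zeros_like(b, dtype=float)
    for _ in range(num_iter):
        # u <- D^-1 b + D^-1 W u + lam * (u - degree-weighted mean of u)
        u = b / D + (W @ u) / D + lam * (u - (D * u).sum(axis=0) / d.sum())
    return u
```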
Below, I provide my reimplemented results for MNIST, using the optimal parameter $\lambda=0.2$ given in Table 5 of the paper. However, I have doubts about the parameter $\epsilon$ used when computing the weight matrix, $w_{ij}=\exp(-4\|x_{i}-x_{j}\|^{2}/d_{K}(x_{i})^{2})+\epsilon$. In the paper, $\epsilon$ is set to $1$.
| Method | $\epsilon=1$ (in paper) | $\epsilon=0$ |
| --- | --- | --- |
| V-Poisson ($\lambda=0.2$) | 92.68 | 86.47 |
| Poisson | 90.98 | 93.36 |
However, for Poisson learning the result is far better when $\epsilon=0$, even better than V-Poisson. Since the original Poisson learning paper does not include this parameter $\epsilon$, I wonder why the authors added it, and I doubt whether the comparison with Poisson learning is fair.
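For concreteness, this is how I construct the weight matrix in my reimplementation (a sketch, assuming $\epsilon$ is added only on the $K$-nearest-neighbor edges; adding it to all $n^2$ pairs would make the graph fully dense). The helper name and defaults are mine:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_weights(X, K=10, eps=1.0):
    """Sketch: w_ij = exp(-4 ||x_i - x_j||^2 / d_K(x_i)^2) + eps on k-NN
    edges, where d_K(x_i) is the distance to the K-th nearest neighbor."""
    nbrs = NearestNeighbors(n_neighbors=K + 1).fit(X)
    dist, idx = nbrs.kneighbors(X)        # column 0 is the point itself
    dK = dist[:, -1]                      # distance to the K-th neighbor
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j, d_ij in zip(idx[i, 1:], dist[i, 1:]):
            W[i, j] = np.exp(-4.0 * d_ij ** 2 / dK[i] ** 2) + eps
    return np.maximum(W, W.T)             # symmetrize the graph
```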
For V-Poisson learning on CIFAR-10, we directly use Poisson learning on the weight matrix, after subtracting the minimum positive value from all entries greater than 0.
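If I read that step correctly, it amounts to the following short sketch (hypothetical helper name, assuming a dense NumPy array); subtracting the smallest positive weight effectively removes a uniform $+\epsilon$ offset from the graph edges:

```python
import numpy as np

def shift_positive_entries(W):
    """Sketch of the CIFAR-10 preprocessing as I understand it:
    subtract the smallest positive weight from every positive entry."""
    W = W.copy()
    pos = W > 0
    W[pos] -= W[pos].min()
    return W
```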
Dear authors,
Any idea when you will release the code on GitHub?