THUDM / GRAND

Source code and dataset of the NeurIPS 2020 paper "Graph Random Neural Network for Semi-Supervised Learning on Graphs"
MIT License

Why use L2 Norm instead of KL divergence #2

Closed bdy9527 closed 4 years ago

bdy9527 commented 4 years ago

In the consistency regularization loss, why do you use the L2 norm instead of KL divergence?

wzfhaha commented 4 years ago

Here the L2 norm can be replaced by KL divergence. We adopt the L2 norm because it achieved better performance in our experiments.
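For illustration, here is a minimal NumPy sketch of the two interchangeable consistency losses being discussed: the squared L2 distance used in GRAND and a KL-divergence alternative, each measuring how far the augmented predictions are from the sharpened average prediction. Function names, the temperature value, and the exact averaging are assumptions for this sketch, not the repository's actual implementation.

```python
import numpy as np

def sharpen(p, T=0.5):
    # Temperature sharpening of the averaged prediction distribution,
    # as described in the GRAND paper (T is an illustrative default).
    p = p ** (1.0 / T)
    return p / p.sum(axis=-1, keepdims=True)

def consistency_l2(preds):
    # preds: (S, N, C) softmax outputs from S random-propagation augmentations.
    # Squared L2 distance between each augmented prediction and the
    # sharpened average, averaged over augmentations, nodes, and classes.
    avg = sharpen(preds.mean(axis=0))
    return np.mean((preds - avg) ** 2)

def consistency_kl(preds, eps=1e-8):
    # KL(sharpened average || each augmented prediction), averaged.
    # A drop-in alternative to the L2 variant above.
    avg = sharpen(preds.mean(axis=0))
    return np.mean(np.sum(avg * (np.log(avg + eps) - np.log(preds + eps)), axis=-1))
```

Both losses are zero when every augmentation already agrees with the sharpened average (e.g. all-uniform predictions), and both are non-negative, so either can serve as the consistency term; the paper's choice of L2 is empirical.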