azminewasi / Awesome-Graph-Research-ICLR2024

A comprehensive resource hub compiling the graph papers accepted at the International Conference on Learning Representations (ICLR) 2024.
MIT License
artificial-intelligence awesome awesome-list awesome-lists deep-learning deep-neural-networks equivariant-network gnn graph-clustering graph graph-algorithms graph-neural-network graph-neural-networks graph-transformer iclr iclr2024 machine-learning semisupervised-learning supervised-learning

Awesome ICLR 2024 Graph Paper Collection

This repo contains a comprehensive compilation of graph papers presented at the International Conference on Learning Representations (ICLR) 2024. Graph (or geometric) machine learning plays an indispensable role in machine learning research, providing insights, methodologies, and solutions for a diverse array of challenges. Whether they introduce pioneering architectures, optimization techniques, theoretical analyses, or empirical investigations, these papers make substantial contributions to the advancement of the field. All papers are categorized into subtopics below.

All Papers:

View paper list!

- [**Heterophily**](#heterophily)
  - [Locality-Aware Graph Rewiring in GNNs](#locality-aware-graph-rewiring-in-gnns)
  - [Probabilistically Rewired Message-Passing Neural Networks](#probabilistically-rewired-message-passing-neural-networks)
- [**Graph Transformer**](#graph-transformer)
  - [Training Graph Transformers via Curriculum-Enhanced Attention Distillation](#training-graph-transformers-via-curriculum-enhanced-attention-distillation)
  - [Transformers vs. Message Passing GNNs: Distinguished in Uniform](#transformers-vs-message-passing-gnns-distinguished-in-uniform)
  - [Polynormer: Polynomial-Expressive Graph Transformer in Linear Time](#polynormer-polynomial-expressive-graph-transformer-in-linear-time)
- [**Spectral/Polynomial GNN**](#spectralpolynomial-gnn)
  - [Learning Adaptive Multiresolution Transforms via Meta-Framelet-based Graph Convolutional Network](#learning-adaptive-multiresolution-transforms-via-meta-framelet-based-graph-convolutional-network)
  - [PolyGCL: GRAPH CONTRASTIVE LEARNING via Learnable Spectral Polynomial Filters](#polygcl-graph-contrastive-learning-via-learnable-spectral-polynomial-filters)
- [**Shape-aware Graph Spectral Learning**](#shape-aware-graph-spectral-learning)
  - [HoloNets: Spectral Convolutions do extend to Directed Graphs](#holonets-spectral-convolutions-do-extend-to-directed-graphs)
- [**Text-attributed Graph**](#text-attributed-graph)
  - [Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning](#harnessing-explanations-llm-to-lm-interpreter-for-enhanced-text-attributed-graph-representation-learning)
- [**Equivariant GNNs**](#equivariant-gnns)
  - [Orbit-Equivariant Graph Neural Networks](#orbit-equivariant-graph-neural-networks)
  - [Rethinking the Benefits of Steerable Features in 3D Equivariant Graph Neural Networks](#rethinking-the-benefits-of-steerable-features-in-3d-equivariant-graph-neural-networks)
  - [Clifford Group Equivariant Simplicial Message Passing Networks](#clifford-group-equivariant-simplicial-message-passing-networks)
  - [Graph Neural Networks for Learning Equivariant Representations of Neural Networks](#graph-neural-networks-for-learning-equivariant-representations-of-neural-networks)
- [**Theory, Weisfeiler & Leman go**](#theory-weisfeiler--leman-go)
  - [$G^2N^2$: Weisfeiler and Lehman go grammatical](#g2n2--weisfeiler-and-lehman-go-grammatical)
  - [Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness](#beyond-weisfeiler-lehman-a-quantitative-framework-for-gnn-expressiveness)
- [**Diffusion-based Generation**](#diffusion-based-generation)
  - [Graph Generation with $K^2$-trees](#graph-generation-with--k2-trees)
- [**Contrastive Learning**](#contrastive-learning)
  - [PolyGCL: GRAPH CONTRASTIVE LEARNING via Learnable Spectral Polynomial Filters](#polygcl-graph-contrastive-learning-via-learnable-spectral-polynomial-filters)
  - [A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets Spiking Neural Networks](#a-graph-is-worth-1-bit-spikes-when-graph-contrastive-learning-meets-spiking-neural-networks)
- [**Proteins**](#proteins)
  - [Rigid Protein-Protein Docking via Equivariant Elliptic-Paraboloid Interface Prediction](#rigid-protein-protein-docking-via-equivariant-elliptic-paraboloid-interface-prediction)
- [**Proteins, Crystals and Material Generation**](#proteins-crystals-and-material-generation)
  - [Space Group Constrained Crystal Generation](#space-group-constrained-crystal-generation)
  - [Scalable Diffusion for Materials Generation](#scalable-diffusion-for-materials-generation)
  - [MOFDiff: Coarse-grained Diffusion for Metal-Organic Framework Design](#mofdiff-coarse-grained-diffusion-for-metal-organic-framework-design)
- [**Causality**](#causality)
  - [Causality-Inspired Spatial-Temporal Explanations for Dynamic Graph Neural Networks](#causality-inspired-spatial-temporal-explanations-for-dynamic-graph-neural-networks)
- [**Anomaly Detection**](#anomaly-detection)
  - [Rayleigh Quotient Graph Neural Networks for Graph-level Anomaly Detection](#rayleigh-quotient-graph-neural-networks-for-graph-level-anomaly-detection)
  - [Boosting Graph Anomaly Detection with Adaptive Message Passing](#boosting-graph-anomaly-detection-with-adaptive-message-passing)
- [**LLM**](#llm)
  - [Talk like a Graph: Encoding Graphs for Large Language Models](#talk-like-a-graph-encoding-graphs-for-large-language-models)
  - [Label-free Node Classification on Graphs with Large Language Models (LLMs)](#label-free-node-classification-on-graphs-with-large-language-models-llms)
- [**Angular Synchronization**](#angular-synchronization)
  - [Robust Angular Synchronization via Directed Graph Neural Networks](#robust-angular-synchronization-via-directed-graph-neural-networks)


Heterophily

Locality-Aware Graph Rewiring in GNNs

The main contribution of this work is a novel graph rewiring framework that simultaneously reduces over-squashing, respects graph locality, and preserves sparsity, outperforming existing techniques on real-world benchmarks.

**Abstract**: Graph Neural Networks (GNNs) are popular models for machine learning on graphs that typically follow the message-passing paradigm, whereby the feature of a node is updated recursively upon aggregating information over its neighbors. While exchanging messages over the input graph endows GNNs with a strong inductive bias, it can also make GNNs susceptible to *over-squashing*, thereby preventing them from capturing long-range interactions in the given graph. To rectify this issue, graph rewiring techniques have been proposed as a means of improving information flow by altering the graph connectivity. In this work, we identify three desiderata for graph-rewiring: **(i) reduce over-squashing, (ii) respect the locality of the graph, and (iii) preserve the sparsity of the graph**. We highlight fundamental trade-offs that occur between spatial and spectral rewiring techniques; while the former often satisfy (i) and (ii) but not (iii), the latter generally satisfy (i) and (iii) at the expense of (ii). We propose a novel rewiring framework that satisfies all of (i)--(iii) through a locality-aware sequence of rewiring operations. We then discuss a specific instance of such rewiring framework and validate its effectiveness on several real-world benchmarks, showing that it either matches or significantly outperforms existing rewiring approaches.

**OpenReview**: https://openreview.net/pdf?id=4Ua4hKiAJX

![](fig/4Ua4hKiAJX.jpg)
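As a rough illustration of the three desiderata (not the authors' algorithm), the sketch below adds a small budget of shortcut edges only between nodes at hop distance exactly `k`: connectivity improves (i), new edges stay local (ii), and the budget caps density (iii). The function name, the degree-based priority, and the `budget` parameter are illustrative assumptions.

```python
import networkx as nx

def locality_aware_rewire(G: nx.Graph, k: int = 2, budget: int = 10) -> nx.Graph:
    """Toy locality-aware rewiring: add at most `budget` shortcut edges
    between node pairs at hop distance exactly k (a locality constraint),
    preferring low-degree pairs (a crude proxy for over-squashed
    bottlenecks). Illustrative only, not the paper's method."""
    H = G.copy()
    candidates = []
    for u in G.nodes:
        # BFS truncated at depth k gives all nodes within k hops of u.
        lengths = nx.single_source_shortest_path_length(G, u, cutoff=k)
        for v, d in lengths.items():
            if d == k and u < v:  # distance-k pairs only; skip duplicates
                candidates.append((G.degree[u] + G.degree[v], u, v))
    # Low total degree first: these regions have the least message capacity.
    for _, u, v in sorted(candidates)[:budget]:
        H.add_edge(u, v)
    return H

G = nx.path_graph(12)               # a long path: a classic over-squashing case
H = locality_aware_rewire(G, k=2, budget=4)
print(G.number_of_edges(), "->", H.number_of_edges())
```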

Probabilistically Rewired Message-Passing Neural Networks

Develops probabilistically rewired message-passing graph neural networks (PR-MPNNs), which enhance predictive power by learning to dynamically adjust graph structures, outperforming traditional MPNNs and graph transformers.

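The sampling step at the heart of probabilistic rewiring can be sketched with the Gumbel top-$k$ trick: perturb learned edge logits with Gumbel noise and keep the $k$ best, which draws an exactly-$k$ edge set while remaining usable in training. A minimal sketch, assuming a hypothetical upstream edge scorer; this is not the authors' full architecture.

```python
import torch

def sample_k_edges(logits: torch.Tensor, k: int) -> torch.Tensor:
    """Sample exactly k candidate edges via the Gumbel-top-k trick.
    `logits` holds one score per candidate edge; returns a 0/1 mask.
    Perturbing logits with Gumbel noise and taking the top k draws
    from the corresponding Plackett-Luce distribution."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    topk = torch.topk(logits + gumbel, k).indices
    mask = torch.zeros_like(logits)
    mask[topk] = 1.0
    return mask

# Toy usage: 10 candidate edges scored by some upstream model, keep 3.
logits = torch.randn(10)
print(sample_k_edges(logits, k=3))
```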



Graph Transformer

Training Graph Transformers via Curriculum-Enhanced Attention Distillation

A curriculum-enhanced attention distillation method for semi-supervised node classification that leverages local and global graph Transformers and outperforms state-of-the-art approaches on multiple benchmarks.


Transformers vs. Message Passing GNNs: Distinguished in Uniform

Graph Transformers and MPGNNs are incomparable in terms of uniform function approximation, and neither is "universal" in this setting.


Polynormer: Polynomial-Expressive Graph Transformer in Linear Time

A linear-time graph transformer that performs well on both homophilic and heterophilic graphs by learning high-degree equivariant polynomials.




Spectral/Polynomial GNN

Learning Adaptive Multiresolution Transforms via Meta-Framelet-based Graph Convolutional Network

Proposes MM-FGCN, a novel framework that learns adaptive graph multiresolution transforms, achieving state-of-the-art performance on various graph representation learning tasks.


PolyGCL: GRAPH CONTRASTIVE LEARNING via Learnable Spectral Polynomial Filters

They introduce spectral polynomial filters into graph contrastive learning to model heterophilic graphs.

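A learnable spectral polynomial filter of the kind PolyGCL builds on can be sketched as a Chebyshev expansion of the rescaled normalized Laplacian: low-order terms act as low-pass (homophily-friendly) filters, higher-order terms expose high-frequency (heterophilic) structure. A minimal sketch of such a filter, not the PolyGCL model itself:

```python
import torch

def chebyshev_filter(L_hat: torch.Tensor, x: torch.Tensor,
                     theta: torch.Tensor) -> torch.Tensor:
    """Apply sum_i theta[i] * T_i(L_hat) @ x, where T_i are Chebyshev
    polynomials and L_hat is the normalized Laplacian rescaled so its
    spectrum lies in [-1, 1]."""
    t_prev, t_curr = x, L_hat @ x                 # T_0(L)x, T_1(L)x
    out = theta[0] * t_prev + theta[1] * t_curr
    for i in range(2, len(theta)):
        t_prev, t_curr = t_curr, 2 * L_hat @ t_curr - t_prev  # recurrence
        out = out + theta[i] * t_curr
    return out

# Toy usage: 4-node cycle graph, 5 learnable filter coefficients.
A = torch.tensor([[0., 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
deg = A.sum(1)
L = torch.eye(4) - A / torch.sqrt(deg[:, None] * deg[None, :])
L_hat = L - torch.eye(4)   # sym-normalized Laplacian has spectrum in [0, 2]
theta = torch.nn.Parameter(torch.randn(5))
x = torch.randn(4, 3)
print(chebyshev_filter(L_hat, x, theta).shape)  # torch.Size([4, 3])
```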



Shape-aware Graph Spectral Learning


HoloNets: Spectral Convolutions do extend to Directed Graphs

The paper extends spectral convolutions to directed graphs. Corresponding networks are shown to set SOTA on heterophilic node classification tasks and to be stable to topological perturbations.



Text-attributed Graph


Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning

The paper proposes the first framework that leverages LLMs to enhance representation learning on text-attributed graphs, achieving SOTA results on four benchmark datasets.



Equivariant GNNs


Orbit-Equivariant Graph Neural Networks

The paper defines orbit-equivariance, a relaxation of equivariance, to enable solving a new class of problems, and proposes several orbit-equivariant GNNs.

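For contrast, the ordinary permutation equivariance that orbit-equivariance relaxes can be checked numerically: permuting the nodes before a message-passing layer must equal permuting its output. A small numpy sketch of that check, using an illustrative layer that is not from the paper:

```python
import numpy as np

def gnn_layer(A: np.ndarray, X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One mean-aggregation message-passing layer: tanh(norm(A) X W)."""
    A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalized adjacency
    return np.tanh(A_hat @ X @ W)

rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.5).astype(float)
A = np.maximum(A, A.T) + np.eye(5)             # symmetric, with self-loops
X, W = rng.normal(size=(5, 3)), rng.normal(size=(3, 3))

P = np.eye(5)[rng.permutation(5)]              # a node permutation matrix
lhs = gnn_layer(P @ A @ P.T, P @ X, W)         # permute graph, then apply
rhs = P @ gnn_layer(A, X, W)                   # apply, then permute
print(np.allclose(lhs, rhs))                   # True: the layer is equivariant
```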


Rethinking the Benefits of Steerable Features in 3D Equivariant Graph Neural Networks

The paper discusses the benefits of steerable features of different types for 3D equivariant graph neural networks.


Clifford Group Equivariant Simplicial Message Passing Networks

The main contribution of this work is the development of Clifford Group Equivariant Simplicial Message Passing Networks, which integrate Clifford group-equivariant layers with simplicial message passing, achieving superior performance on geometric tasks by leveraging geometric features and efficiently sharing parameters across dimensions.


Graph Neural Networks for Learning Equivariant Representations of Neural Networks

We propose graph neural networks that learn permutation-equivariant representations of other neural networks.




Theory, Weisfeiler & Leman go

$G^2N^2$: Weisfeiler and Lehman go grammatical

Develops $G^2N^2$, a Graph Neural Network (GNN) derived from Context-Free Grammars (CFGs) and proven to be third-order Weisfeiler-Lehman (3-WL) compliant. It is more efficient than other 3-WL GNNs across various downstream tasks, and the benefit of grammar reduction within the framework is demonstrated.


Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness

This work introduces a framework for quantitatively assessing the expressiveness of Graph Neural Networks (GNNs) via homomorphism expressivity, providing insight into abilities such as subgraph counting, with empirical validation on both synthetic and real-world tasks.




Diffusion-based Generation

Graph Generation with $K^2$-trees

They propose a new graph generative model based on the $K^2$-tree, which is a compact and hierarchical representation for graphs.

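The $K^2$-tree itself is easy to state: recursively split the adjacency matrix into $K \times K$ blocks, emit 1 for blocks containing any edge (and recurse into them) and 0 for empty blocks. A minimal sketch of the encoding, assuming a 0/1 square matrix whose side is a power of $K$; not the generative model itself:

```python
import numpy as np

def k2_tree(A: np.ndarray, K: int = 2) -> list:
    """Level-order K^2-tree encoding of a square 0/1 adjacency matrix
    whose side is a power of K: each level lists, per surviving block,
    one bit saying whether any of its K*K sub-blocks contains an edge."""
    levels, blocks = [], [A]
    while blocks and blocks[0].shape[0] > 1:
        bits, children = [], []
        for B in blocks:
            s = B.shape[0] // K
            for i in range(K):
                for j in range(K):
                    sub = B[i*s:(i+1)*s, j*s:(j+1)*s]
                    nonempty = int(sub.any())
                    bits.append(nonempty)
                    if nonempty and s > 1:
                        children.append(sub)  # recurse only into nonempty blocks
        levels.append(bits)
        blocks = children
    return levels

A = np.zeros((8, 8), dtype=int)
A[0, 1] = A[1, 0] = A[6, 7] = A[7, 6] = 1
print(k2_tree(A))   # sparse matrix -> short bit lists per level
```

The compactness is visible in the output: empty quadrants terminate immediately, so sparse graphs produce short bit strings, which is what makes the representation attractive for generation.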



Contrastive Learning

PolyGCL: GRAPH CONTRASTIVE LEARNING via Learnable Spectral Polynomial Filters

They introduce spectral polynomial filters into graph contrastive learning to model heterophilic graphs.


A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets Spiking Neural Networks

They propose a novel spiking graph contrastive learning framework that learns binarized 1-bit representations for graphs, striking a balanced trade-off between efficiency and performance.

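The 1-bit representation idea can be illustrated with a Heaviside spike plus a straight-through surrogate gradient, so the binary codes remain trainable. A minimal torch sketch of the binarization step only, under common spiking-network conventions; not the paper's full framework:

```python
import torch

class SpikeBinarize(torch.autograd.Function):
    """Heaviside spike in the forward pass, straight-through surrogate
    gradient in the backward pass, so 1-bit codes stay trainable."""
    @staticmethod
    def forward(ctx, membrane: torch.Tensor) -> torch.Tensor:
        ctx.save_for_backward(membrane)
        return (membrane > 0).float()           # 1-bit spike
    @staticmethod
    def backward(ctx, grad_out: torch.Tensor) -> torch.Tensor:
        (membrane,) = ctx.saved_tensors
        # Surrogate: pass gradients only near the firing threshold.
        return grad_out * (membrane.abs() < 1).float()

h = torch.randn(5, 8, requires_grad=True)       # node embeddings
spikes = SpikeBinarize.apply(h)
spikes.sum().backward()
print(spikes)            # strictly 0/1 representations
```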



Proteins

Rigid Protein-Protein Docking via Equivariant Elliptic-Paraboloid Interface Prediction

The main contribution of this work is ElliDock, a novel learning-based method for protein-protein docking that predicts elliptic paraboloid interfaces, ensuring fast inference time and outperforming state-of-the-art methods, particularly in antibody-antigen docking scenarios.




Proteins, Crystals and Material Generation


Space Group Constrained Crystal Generation

The main contribution of this work is DiffCSP++, a diffusion model enhanced from DiffCSP, which incorporates the space group constraint into crystal generation, leading to improved performance on crystal structure prediction and ab initio crystal generation compared to existing methods.


Scalable Diffusion for Materials Generation

We scale up diffusion models on a novel unified representation of crystal structures and generate orders of magnitude more novel stable materials, verified by density functional theory (DFT) calculations, compared to previous generative modeling approaches.


MOFDiff: Coarse-grained Diffusion for Metal-Organic Framework Design

They develop a generative model for metal-organic frameworks (MOFs) using a coarse-grained diffusion approach to discover carbon-capture materials, and validate the candidates through molecular simulations.




Causality

Causality-Inspired Spatial-Temporal Explanations for Dynamic Graph Neural Networks

An innovative causality-inspired generative model that enhances the interpretability of dynamic graph neural networks (DyGNNs).




Anomaly Detection

Rayleigh Quotient Graph Neural Networks for Graph-level Anomaly Detection

They propose RQGNN, a spectral GNN for graph-level anomaly detection that combines Rayleigh Quotient learning with the CWGNN-RQ framework for improved performance.

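The Rayleigh quotient at the heart of the method is simply $R(L, x) = x^\top L x / x^\top x$ for the graph Laplacian $L$ and a graph signal $x$: small values mean the signal is smooth over the graph, large values mean it concentrates on high-frequency components, a cue associated with anomalies. A quick numpy illustration of the quotient only, not of RQGNN:

```python
import numpy as np

def rayleigh_quotient(A: np.ndarray, x: np.ndarray) -> float:
    """R(L, x) = (x^T L x) / (x^T x), with L the combinatorial Laplacian.
    Small values: x is smooth over the graph; large values: x carries
    high-frequency energy."""
    L = np.diag(A.sum(axis=1)) - A
    return float(x @ L @ x) / float(x @ x)

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle
print(rayleigh_quotient(A, np.array([1.0, 1.0, 1.0])))   # 0.0 (smooth)
print(rayleigh_quotient(A, np.array([1.0, -1.0, 0.0])))  # 3.0 (oscillatory)
```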

Boosting Graph Anomaly Detection with Adaptive Message Passing

GADAM integrates MLP-based local inconsistency mining with adaptive message passing for improved unsupervised graph anomaly detection, outperforming state-of-the-art methods.




LLM

Talk like a Graph: Encoding Graphs for Large Language Models

It is the first comprehensive study of encoding graph data as text for large language models, boosting performance on graph reasoning tasks.


**Abstract**: Graphs are a powerful tool for representing and analyzing complex relationships in real-world applications such as social networks, recommender systems, and computational finance. Reasoning on graphs is essential for drawing inferences about the relationships between entities in a complex system, and to identify hidden patterns and trends. Despite the remarkable progress in automated reasoning with natural text, reasoning on graphs with large language models (LLMs) remains an understudied problem. In this work, we perform the first comprehensive study of encoding graph-structured data as text for consumption by LLMs. We show that LLM performance on graph reasoning tasks varies on three fundamental levels: (1) the graph encoding method, (2) the nature of the graph task itself, and (3) interestingly, the very structure of the graph considered. These novel results provide valuable insight on strategies for encoding graphs as text. Using these insights we illustrate how the correct choice of encoders can boost performance on graph reasoning tasks inside LLMs by 4.8% to 61.8%, depending on the task.
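Two of the simplest encoder styles such a study compares, a terse symbolic encoding and a narrative "friendship" encoding, can be sketched directly; the exact prompt wording below is illustrative, not the paper's templates:

```python
def adjacency_encoding(edges: list[tuple[int, int]]) -> str:
    """Terse, symbolic style: list edges as (u, v) pairs."""
    pairs = ", ".join(f"({u}, {v})" for u, v in edges)
    return f"G describes a graph with edges: {pairs}."

def friendship_encoding(edges: list[tuple[int, int]], names: list[str]) -> str:
    """Narrative style: map node ids to people and edges to friendships."""
    facts = ". ".join(f"{names[u]} and {names[v]} are friends" for u, v in edges)
    return f"{facts}."

edges = [(0, 1), (1, 2)]
print(adjacency_encoding(edges))
print(friendship_encoding(edges, ["Ada", "Bob", "Cyd"]))
```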

Label-free Node Classification on Graphs with Large Language Models (LLMs)

They introduce the LLM-GNN pipeline for label-free node classification, leveraging LLMs for annotation and GNNs for predictions.


**Abstract**: In recent years, there have been remarkable advancements in node classification achieved by Graph Neural Networks (GNNs). However, they necessitate abundant high-quality labels to ensure promising performance. In contrast, Large Language Models (LLMs) exhibit impressive zero-shot proficiency on text-attributed graphs. Yet, they face challenges in efficiently processing structural data and suffer from high inference costs. In light of these observations, this work introduces a label-free node classification on graphs with LLMs pipeline, LLM-GNN. It amalgamates the strengths of both GNNs and LLMs while mitigating their limitations. Specifically, LLMs are leveraged to annotate a small portion of nodes and then GNNs are trained on LLMs' annotations to make predictions for the remaining large portion of nodes. The implementation of LLM-GNN faces a unique challenge: how can we actively select nodes for LLMs to annotate and consequently enhance the GNN training? How can we leverage LLMs to obtain annotations of high quality, representativeness, and diversity, thereby enhancing GNN performance with less cost? To tackle this challenge, we develop an annotation quality heuristic and leverage the confidence scores derived from LLMs to perform advanced node selection. Comprehensive experimental results validate the effectiveness of LLM-GNN. In particular, LLM-GNN can achieve an accuracy of 74.9% on the vast-scale OGBN-PRODUCTS dataset at a cost of less than one dollar.
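At a high level the flow is: (1) pick a small node subset, (2) have an LLM label it with confidence scores, (3) keep only confident annotations, (4) train a GNN on them. A schematic sketch of that flow with stand-in stubs; `llm_annotate` and `train_gnn` are hypothetical placeholders, not a real API, and random sampling stands in for the paper's quality/diversity-aware selection heuristic:

```python
import random

def llm_annotate(text: str) -> tuple[str, float]:
    """Stand-in for an LLM call returning (label, confidence)."""
    return ("class_a" if "graph" in text else "class_b", random.random())

def train_gnn(labeled: dict) -> dict:
    """Stand-in for GNN training; returns the labels it would fit on."""
    return labeled

def llm_gnn_pipeline(texts: dict, budget: int = 4, conf_threshold: float = 0.5):
    # (1) Active node selection; the paper uses a quality/diversity-aware
    # heuristic, plain random sampling stands in for it here.
    selected = random.sample(list(texts), k=min(budget, len(texts)))
    # (2)+(3) LLM annotation, kept only where the LLM reports confidence.
    labels = {}
    for v in selected:
        label, confidence = llm_annotate(texts[v])
        if confidence >= conf_threshold:
            labels[v] = label
    # (4) A GNN then generalizes the few LLM labels to all other nodes.
    return train_gnn(labeled=labels)

texts = {i: f"node {i} talks about graph learning" for i in range(10)}
print(llm_gnn_pipeline(texts))
```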



Angular Synchronization

Robust Angular Synchronization via Directed Graph Neural Networks

They propose a neural network framework with novel loss functions to tackle the angular synchronization problem and its extension to k-synchronization.


Abstract: "The angular synchronization problem aims to accurately estimate (up to a constant additive phase) a set of unknown angles $\theta_1, \dots, \theta_n\in[0, 2\pi)$ from $m$ noisy measurements of their offsets $\theta_i-\theta_j$ mod $2\pi.$ Applications include, for example, sensor network localization, phase retrieval, and distributed clock synchronization. An extension of the problem to the heterogeneous setting (dubbed $k$-synchronization) is to estimate $k$ groups of angles simultaneously, given noisy observations (with unknown group assignment) from each group. Existing methods for angular synchronization usually perform poorly in high-noise regimes, which are common in applications. In this paper, we leverage neural networks for the angular synchronization problem, and its heterogeneous extension, by proposing GNNSync, a theoretically-grounded end-to-end trainable framework using directed graph neural networks. In addition, new loss functions are devised to encode synchronization objectives. Experimental results on extensive data sets demonstrate that GNNSync attains competitive, and often superior, performance against a comprehensive set of baselines for the angular synchronization problem and its extension, validating the robustness of GNNSync even at high noise levels."

Missing any paper? If any paper is absent from the list, please feel free to open an issue or submit a pull request. I'll gladly add it!


More Collections:


Credits

Azmine Toushik Wasi
