fani-lab / SEERa

A framework to predict the future user communities in a text streaming social network based on the users’ topics of interest.

KDD2022.ROLAND: Graph Learning Framework for Dynamic Graphs #69

Open soroush-ziaeinejad opened 1 year ago

soroush-ziaeinejad commented 1 year ago

Why did I choose this paper?

- It is one of the most closely related papers to our work (SEERa), especially to the Graph Embedding Layer.
- We can use this framework to dynamically model our user similarity graphs into a temporal representation.
- It is written by well-known experts in this area.
- Nice narration and write-up.

Main problem:

ROLAND is a graph learning framework for dynamic graphs, i.e., graphs that change over time, such as social networks, transportation networks, and communication networks. These graphs are often large, complex, and difficult to model. The goal of ROLAND is to provide a flexible and efficient framework for learning the structure and dynamics of dynamic graphs.

Main insight: extend any successful static graph representation (GNN) to dynamic data.
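To make the insight concrete, here is a minimal sketch (my own illustration, not the authors' code; the layer names, shapes, and mean-style aggregation are assumptions) of the core idea: treat the node embeddings produced by a static GNN layer as node states and update them at every new snapshot with a recurrent cell.

```python
import torch
import torch.nn as nn

class RecurrentGraphLayer(nn.Module):
    """Wraps a static message-passing step with a GRU-based node-state update."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)   # stand-in for any static GNN layer
        self.update = nn.GRUCell(dim, dim)  # carries node states across snapshots

    def forward(self, x, adj, h_prev):
        # static GNN computation on the current snapshot
        msg = torch.relu(self.linear(adj @ x))
        # combine the new embedding with the previous node states
        return self.update(msg, h_prev)

# toy usage: 5 nodes, 8-dim embeddings, two consecutive snapshots
layer = RecurrentGraphLayer(dim=8)
h = torch.zeros(5, 8)          # initial node states
for _ in range(2):             # iterate over snapshots
    x = torch.randn(5, 8)      # node features of the current snapshot
    adj = torch.eye(5)         # placeholder (normalized) adjacency matrix
    h = layer(x, adj, h)       # updated node states H_t
```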

Applications:

Fraud detection, anti-money laundering, recommender systems.

Existing work:

Most common graph representation methods are designed for static graphs. There are multiple research works on dynamic graph representation; however, only a few leverage the power of successful static graph representation approaches to model dynamic or evolving data.

Main limitations which ROLAND addresses:

  1. Model design: no incorporation of static GNN design choices (skip-connections, batch normalization, edge embeddings), i.e., successful static GNNs are not adapted for dynamic data (see the sketch after this list).
  2. Evaluation settings: the evolving nature of the data and the model is ignored, and the model is not updated for new data.
  3. Training strategies: the entire dataset, or a huge amount of it, is kept in memory (or on the GPU).
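As a minimal sketch (my own illustration; the layer composition is an assumption), this is the kind of static-GNN building block, message passing followed by batch normalization with a skip connection, that limitation 1 says dynamic models fail to reuse:

```python
import torch
import torch.nn as nn

class StaticGNNBlock(nn.Module):
    """A static-GNN-style layer: message passing + batch normalization + skip connection."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)  # stand-in for any message-passing layer
        self.norm = nn.BatchNorm1d(dim)

    def forward(self, x, adj):
        out = self.norm(torch.relu(self.linear(adj @ x)))
        return out + x  # skip connection, as in successful static designs

block = StaticGNNBlock(dim=8)
x, adj = torch.randn(5, 8), torch.eye(5)
print(block(x, adj).shape)  # torch.Size([5, 8])
```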

Inputs:

Three different steps (algorithms):

  1. GNN forward computation: graph snapshots g_1, g_2, ..., g_T + the hidden node states carried over from the previous snapshot (H_{T-1})
  2. Live-update evaluation: graph snapshots g_1, ..., g_T + labels y_1, ..., y_T + a trained GNN
  3. Training algorithm: g_1, ..., g_T + y_1, ..., y_T + H_{T-1} + a smoothing factor + a model for fine-tuning (the meta-model)

Outputs:

Three different steps (algorithms):

  1. GNN forward computation: y_T = the predicted probabilities of future edges + the updated node states H_T
  2. Live-update evaluation: performance (MRR) + the trained GNN model
  3. Training algorithm: the fine-tuned GNN model + the updated meta-model
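To tie the inputs and outputs together, here is a minimal, self-contained sketch (my own illustration; the toy one-layer GNN, the dot-product link scorer, and all names and shapes are assumptions) of a live-update loop: for each incoming snapshot, score future edges from the current node states (forward computation), measure MRR on the new snapshot before training on it (live-update evaluation), then fine-tune the model and carry the updated states forward (training).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
num_nodes, dim, num_snapshots = 20, 16, 3

class SnapshotGNN(nn.Module):
    """Toy one-layer GNN with a GRU node-state update and dot-product link scoring."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.update = nn.GRUCell(dim, dim)

    def forward(self, x, adj, h_prev):
        h = self.update(torch.relu(self.linear(adj @ x)), h_prev)  # updated states H_t
        scores = h @ h.t()                                         # future-edge logits
        return scores, h

def mrr(scores, true_dst, src):
    """Mean reciprocal rank of the true destination among all candidate nodes."""
    ranks = (scores[src] >= scores[src, true_dst].unsqueeze(1)).sum(dim=1).float()
    return (1.0 / ranks).mean().item()

model = SnapshotGNN(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
h = torch.zeros(num_nodes, dim)  # H_0

for t in range(num_snapshots):
    # one random snapshot: node features, adjacency, and "future" edges to predict
    x = torch.randn(num_nodes, dim)
    adj = (torch.rand(num_nodes, num_nodes) < 0.2).float()
    src = torch.arange(num_nodes)
    dst = torch.randint(num_nodes, (num_nodes,))

    # 1) forward computation: edge scores y_t and updated node states H_t
    scores, h_new = model(x, adj, h)
    # 2) live-update evaluation on the incoming snapshot, before training on it
    print(f"snapshot {t}: MRR = {mrr(scores.detach(), dst, src):.3f}")
    # 3) training: fine-tune on this snapshot, then carry H_t forward (detached)
    loss = nn.functional.cross_entropy(scores[src], dst)
    opt.zero_grad()
    loss.backward()
    opt.step()
    h = h_new.detach()
```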

Method:

ROLAND consists of several components: a dynamic graph representation, a graph evolution model, a graph learning algorithm, and an evaluation framework. The dynamic graph representation represents the graph at different time steps, the graph evolution model captures the dynamics of the graph, the graph learning algorithm infers the structure and dynamics of the graph from the data, and the evaluation framework measures the performance of the algorithm. The three major components are:

  1. Model design: static GNNs extended to dynamic graphs by hierarchically updating node embeddings across snapshots.
  2. Training: an incremental strategy for high scalability.
  3. Evaluation: a live-update setting that captures the evolving nature of dynamic data.
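A minimal sketch (my own illustration; the exponential-smoothing update of model weights and all variable names are assumptions about how the meta-model and smoothing factor mentioned above could be maintained) of keeping a meta-model as a smoothed running copy of the per-snapshot fine-tuned weights:

```python
import copy
import torch
import torch.nn as nn

alpha = 0.9  # smoothing factor (hypothetical value)

# the GNN fine-tuned on the latest snapshot, and the meta-model that is
# carried across snapshots as a smoothed average of fine-tuned weights
finetuned = nn.Linear(16, 16)    # stand-in for a fine-tuned GNN
meta = copy.deepcopy(finetuned)  # meta-model initialized from it

def update_meta(meta, finetuned, alpha):
    """meta <- alpha * meta + (1 - alpha) * finetuned, parameter by parameter."""
    with torch.no_grad():
        for p_meta, p_new in zip(meta.parameters(), finetuned.parameters()):
            p_meta.mul_(alpha).add_(p_new, alpha=1 - alpha)

# after fine-tuning on snapshot t, fold the new weights into the meta-model;
# fine-tuning on the next snapshot then starts from the meta-model's weights
update_meta(meta, finetuned, alpha)
finetuned.load_state_dict(meta.state_dict())
```

Only the current snapshot and two small models need to stay in memory under this scheme, which is what makes the training incremental rather than full-history.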

Gaps:

Experimental Setup:

Baselines:

Results:

Code:

https://github.com/snap-stanford/roland

Presentation:

There is no available presentation for this paper.