
Peer Review #3


distillpub-reviewers commented 3 years ago

The following peer review was solicited as part of the Distill review process.

The reviewer chose to waive anonymity. Distill offers reviewers a choice between anonymous review and offering reviews under their name. Non-anonymous review allows reviewers to get credit for the service they offer to the community.

Distill is grateful to Humza Iqbal for taking the time to review this article.


General Comments

Highly enjoyed the article! It was a great look into GNNs, their various aspects, and the problems they are used for. My favorite part was how thorough the article was in exploring the mechanics: diving into aspects such as different pooling functions, how to batch graphs, and so on. The diagrams were very fun to play around with; being able to manipulate the graphs made it easy to see how they were affected by changing the different building blocks.

One thing that may be nice to add, or at least reference, is this article on the equivalence between Transformers and GNNs: https://graphdeeplearning.github.io/post/transformers-are-gnns/. I thought of this when Transformers were mentioned in the article ("This refers to the way text is represented in RNNs; other models, such as Transformers"). I think an aside could be added in the section where Graph Attention Networks are mentioned.
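For concreteness, a minimal NumPy sketch of that equivalence; the function name `graph_attention` and all shapes here are illustrative choices, not code from the article under review. The point is that a graph-attention layer whose adjacency mask covers every pair of nodes reduces to ordinary single-head self-attention over a set of tokens:

```python
import numpy as np

def graph_attention(X, A, Wq, Wk, Wv):
    """One single-head graph-attention layer: node i attends only to
    nodes j with A[i, j] > 0. Assumes every node has at least one
    neighbor (or a self-loop), so no softmax row is all-masked."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / np.sqrt(K.shape[1])
    scores = np.where(A > 0, scores, -np.inf)       # mask out non-edges
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)
    return weights @ V                               # aggregate neighbor messages

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.normal(size=(n, d))                          # node features
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

# On a fully connected graph (all-ones adjacency, self-loops included),
# the layer is exactly ordinary self-attention.
out = graph_attention(X, np.ones((n, n)), Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```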

It may also be good to point out that there is ongoing research on message passing to find the optimal way to get information to flow through a graph. As an example, this paper https://arxiv.org/abs/2009.03717 deals with the issue of encoding global information well. On that note, it might be good to add a sentence on the limitations of message passing (i.e., if I increase my window size too much, I risk my node representations converging and losing the ability to update).
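A toy NumPy sketch of that convergence failure mode (commonly called over-smoothing); the 4-node graph and feature dimensions below are made up purely for illustration:

```python
import numpy as np

# Toy illustration of over-smoothing: stacking many rounds of mean
# aggregation drives all node representations toward the same vector.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)           # a small connected graph
A_hat = A + np.eye(4)                               # add self-loops
A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)    # row-normalize => mean pooling

H = np.random.default_rng(1).normal(size=(4, 3))    # random initial node features
for _ in range(50):                                 # 50 rounds of message passing
    H = A_hat @ H
print(H.round(3))  # all rows nearly identical: node identity has washed out
```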


Distill employs a reviewer worksheet as a help for reviewers.

The first three parts of this worksheet ask reviewers to rate a submission along certain dimensions on a scale from 1 to 5. While the scale meaning is consistently "higher is better", please read the explanations for our expectations for each score—we do not expect even exceptionally good papers to receive a perfect score in every category, and expect most papers to be around a 3 in most categories.

Any concerns or conflicts of interest that you are aware of?: No known conflicts of interest
What type of contributions does this article make?: Exposition on an emerging research direction

Advancing the Dialogue Score
How significant are these contributions? 4/5
Outstanding Communication Score
Article Structure 5/5
Writing Style 4/5
Diagram & Interface Style 4/5
Impact of diagrams / interfaces / tools for thought? 4/5
Readability 4/5

Comments on Readability

The diagrams were overall quite good. One nitpick I have is that, for the diagram showing the difference between max, sum, and mean pooling, it might be better to write "No pooling type can always distinguish between graph pairs such as max pooling on the left and sum / mean pooling on the right".
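For what it's worth, a minimal pure-Python sketch of the failure cases that caption describes; the specific node-feature multisets are invented for illustration:

```python
# Each pooling function conflates some pair of distinct node-feature
# multisets, so no single choice is universally discriminating.
mean = lambda xs: sum(xs) / len(xs)
cases = [
    ("max",  max,  [1, 1, 5], [5, 5]),       # both pool to 5
    ("sum",  sum,  [2, 2],    [1, 3]),       # both pool to 4
    ("mean", mean, [1, 3],    [2, 2, 2]),    # both pool to 2.0
]
for name, pool, g1, g2 in cases:
    assert pool(g1) == pool(g2)
    print(f"{name} pooling: {g1} and {g2} both give {pool(g1)}")
```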

Some minor grammatical nitpicks:

  1. in the section on Graph Attention Networks, the LaTeX doesn't seem formatted quite right for the phrase "( f(node_i, node_j))"; perhaps there was some slight LaTeX error? (A guess at the intended formatting follows this list.)

  2. the phrase "design design" appears in the section on "Learning Edge Representations", where I believe "design decision" was meant.

  3. in the section "GNN Playground", I believe 'allyl alcohol' and 'depth' were meant to be italicized.
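For reference, presumably the intended rendering of that phrase was something like the following; this reconstruction is a guess at the authors' intent:

```latex
f(\text{node}_i, \text{node}_j)
```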

Scientific Correctness & Integrity Score
Are claims in the article well supported? 4/5
Does the article critically evaluate its limitations? How easily would a lay person understand them? 4/5
How easy would it be to replicate (or falsify) the results? 4/5
Does the article cite relevant work? 4/5
Does the article exhibit strong intellectual honesty and scientific hygiene? 3/5

Comments on Scientific Integrity

The article talks about the limitations involved in setting up GNNs and working with them (such as the tradeoffs between different aggregation functions); however, it would have been nice to see some notes on how well GNNs work on various problems, such as generative modeling or interpretability. I put the overall score for the limitations category at a 4; however, if I were to break limitations down into how well particular limitations were explained and overall limitation coverage, I would give them scores of 4 and 3 respectively.

beangoben commented 3 years ago

> One thing that may be nice to add, or at least reference, is this article on the equivalence between Transformers and GNNs: https://graphdeeplearning.github.io/post/transformers-are-gnns/. I thought of this when Transformers were mentioned in the article ("This refers to the way text is represented in RNNs; other models, such as Transformers"). I think an aside could be added in the section where Graph Attention Networks are mentioned.

We agreed and expanded this connection in the Graph Attention Networks subsection.

> It may also be good to point out that there is ongoing research on message passing to find the optimal way to get information to flow through a graph. As an example, this paper https://arxiv.org/abs/2009.03717 deals with the issue of encoding global information well. On that note, it might be good to add a sentence on the limitations of message passing (i.e., if I increase my window size too much, I risk my node representations converging and losing the ability to update).

We agreed and added a subsection, "Some frontiers (and limitations) with GNNs", to the "Into the Wilds" section.

beangoben commented 3 years ago

We thank the reviewer for their time and attention. We have taken their comments into consideration, and we think our work is stronger because of them.

Next, we summarize most of the changes that we have made based on feedback from all reviewers:

Reviewer 1 made several points on improving the writing and presentation of ideas; this resulted in simplifying the language of several sentences, breaking down paragraphs, and expanding examples for some concepts.

Reviewer 1 also asked us to improve on the "lessons" of the GNN playground. These lessons became the subsection "Some empirical GNN design lessons", which details new interactive visualizations that show some of the larger architecture trends for the playground.

Reviewer 3 made a point about expanding on the connection between Transformers and GNNs, as well as on some of the current limitations of GNNs and message-passing frameworks.

All reviewers noted a few typos, LaTeX equation errors, and grammatical mistakes, which we have fixed. The bibliography has expanded slightly.

For a more detailed breakdown of the changes: