atomicarchitects / equiformer_v2

[ICLR 2024] EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations
https://arxiv.org/abs/2306.12059
MIT License

some questions #3

Open mlfffinder opened 1 year ago

mlfffinder commented 1 year ago

Hi, not issues, just some questions.

  1. Is there any comparison of the performance against other SOTA equivariant networks such as MACE or NequIP?

  2. Is MD simulation available, or are there any developments going on?

Thanks for the nice work

yilunliao commented 1 year ago

Hello @mlfffinder

  1. No, but we have already discussed the differences between Equiformer (V1), EquiformerV2, and other works in our papers. We mainly benchmark on OC20 S2EF here since it is so far the largest dataset, it provides node-wise labels, and stronger models simply perform better on it. We note that on other, smaller datasets stronger models do not always perform better, but this is not an issue since we can pre-train on large datasets and then transfer (see the limitations discussed in the EquiformerV2 paper). Moreover, OC20 provides the same setting for training and testing, whereas on other datasets different works might use different training/testing splits and pre-/post-processing, which makes a fair comparison hard.

  2. No, MD simulation is not available here, but I think you can use Equiformer (V1) within other MD simulation codebases (after some modifications); a rough sketch of one such setup follows.
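For illustration only, here is a minimal sketch of how a trained force-field model could be hooked into ASE to drive an MD run. This is not the EquiformerV2 API; the `predict_energy_forces` function and the `MLFFCalculator` class are hypothetical placeholders for however your checkpoint exposes inference.

```python
# Hypothetical sketch: wrapping a trained force-field model as an ASE
# Calculator so it can drive an MD simulation. `predict_energy_forces`
# is a placeholder, NOT the actual EquiformerV2 interface.
import numpy as np
from ase import units
from ase.calculators.calculator import Calculator, all_changes
from ase.md.langevin import Langevin


def predict_energy_forces(atomic_numbers, positions, cell):
    """Placeholder: run your trained model here and return
    (energy in eV, forces in eV/Angstrom as an (N, 3) array)."""
    raise NotImplementedError


class MLFFCalculator(Calculator):
    # ASE queries this list to know which properties the calculator provides.
    implemented_properties = ["energy", "forces"]

    def calculate(self, atoms=None, properties=("energy",),
                  system_changes=all_changes):
        super().calculate(atoms, properties, system_changes)
        energy, forces = predict_energy_forces(
            atoms.get_atomic_numbers(),
            atoms.get_positions(),
            atoms.get_cell(),
        )
        self.results["energy"] = float(energy)
        self.results["forces"] = np.asarray(forces)


# Usage sketch: attach the calculator and run Langevin dynamics at 300 K.
# atoms = ...  # an ase.Atoms object for your system
# atoms.calc = MLFFCalculator()
# dyn = Langevin(atoms, timestep=1.0 * units.fs, temperature_K=300, friction=0.002)
# dyn.run(1000)
```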

Best

mlfffinder commented 1 year ago

Got it. Thanks for the reply