atomicarchitects / equiformer_v2

[ICLR 2024] EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations
https://arxiv.org/abs/2306.12059
MIT License

Training logs of QM9 dataset for EquiformerV2 #11

Open fmocking opened 7 months ago

fmocking commented 7 months ago

Hello, thank you for sharing your great work!

I was wondering if you could share the training logs, as was done for Equiformer. I'm specifically interested in the higher-precision test results. For example, for mu (target 0), the paper reports the result only to two decimal places (0.11), while the Equiformer logs show the higher-precision value 0.1172.

If you're able to share the logs, I would greatly appreciate it as it would allow for a more detailed analysis of the impressive results you achieved.

Thank you again for this excellent contribution to the field. I look forward to hearing back from you @yilunliao.

Best regards,

yilunliao commented 7 months ago

Hi @fmocking

Do you just need the results with higher precision? If yes, let me just provide a table for you here.

fmocking commented 7 months ago

Hi @yilunliao thank you for your prompt response.

Yes, that would be perfect!

fmocking commented 7 months ago

Hi @yilunliao, I was just checking in to see if you had some time to gather the high-precision results on the QM9 dataset. Thanks again.

yilunliao commented 7 months ago

Hi @fmocking

Thanks for reminding me and sorry for the late reply.

Here are the results I have:

| Model | $\mu$ | $\alpha$ | $\epsilon_{HOMO}$ | $\epsilon_{LUMO}$ | $\Delta\epsilon$ | $R^2$ | ZPVE | $U_0$ | $U$ | $H$ | $G$ | $C_v$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EquiformerV2 | 0.00991 | 0.04712 | 14.43 | 13.34 | 29.03 | 0.18559 | 1.47 | 6.17 | 6.49 | 6.22 | 7.57 | 0.02296 |
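For convenience, here is a minimal sketch (not part of the repo's tooling) that collects the table above into a Python dict for downstream analysis. The unit annotations in the comments are assumptions based on common QM9 reporting conventions (meV for energies, D for $\mu$, $a_0^3$ for $\alpha$, $a_0^2$ for $R^2$, cal/mol·K for $C_v$) and should be checked against the paper:

```python
# EquiformerV2 QM9 test MAEs from the table above, collected into a
# dict. Unit labels are assumptions based on standard QM9 conventions;
# verify against the paper before relying on them.
QM9_TEST_MAE = {
    "mu":    0.00991,  # dipole moment (D)
    "alpha": 0.04712,  # isotropic polarizability (a_0^3)
    "homo":  14.43,    # HOMO energy (meV)
    "lumo":  13.34,    # LUMO energy (meV)
    "gap":   29.03,    # HOMO-LUMO gap (meV)
    "r2":    0.18559,  # electronic spatial extent (a_0^2)
    "zpve":  1.47,     # zero-point vibrational energy (meV)
    "u0":    6.17,     # internal energy at 0 K (meV)
    "u":     6.49,     # internal energy at 298.15 K (meV)
    "h":     6.22,     # enthalpy at 298.15 K (meV)
    "g":     7.57,     # free energy at 298.15 K (meV)
    "cv":    0.02296,  # heat capacity at 298.15 K (cal/mol K)
}

for task, mae in QM9_TEST_MAE.items():
    print(f"{task:>5s}: {mae:g}")
```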

For the $\alpha$ task, I found another training log with slightly better results (0.050 in the paper vs. 0.047 in the table above).

Some tasks have the same precision as in the paper; that is the best precision available in my training logs.

Let me know if you have any other questions.