Closed PatReis closed 2 years ago
@hrushikesh-s
@PatReis thanks again for the submission! It is much appreciated! Also, I have to say the code implementation you provide with the kgcnn package is very clean and interpretable. Under the "algorithm_long" key, would you mind putting some more details about the training and/or hyperparameters (and how they were selected)? Even if it is just "Default hyperparameters were used according to the original publication (n_layers, etc.)." It just makes it easier to read on first inspection without having to dig through the reported hyperparameters in dict format.
@hrushikesh-s when you review this PR, could you load the benchmark into an object and compare the numbers with what is reported in our original megnet publication, and see if there are any large discrepancies (and potentially investigate why)?
The way you would do this is as follows:
- check out the `main` branch
- load the benchmark into an object with `MatbenchBenchmark.from_file(...)`
- compare the `scores` attribute for each task

Sure, I will do that.
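To make the comparison step concrete, here is a minimal sketch. Loading via `MatbenchBenchmark.from_file(...)` follows the matbench API mentioned above; the `largest_discrepancies` helper and all numeric values below are hypothetical, shown only to illustrate how one might flag large deviations from the published MEGNet numbers.

```python
# Sketch: flag tasks whose benchmark MAE deviates noticeably from the
# value reported in the original MEGNet publication.
# NOTE: all numbers below are placeholders, not real results.

def largest_discrepancies(reported, observed, rel_tol=0.10):
    """Return tasks whose observed MAE differs from the reported MAE
    by more than `rel_tol` (relative difference)."""
    flagged = {}
    for task, ref in reported.items():
        if task not in observed:
            continue
        rel_diff = abs(observed[task] - ref) / abs(ref)
        if rel_diff > rel_tol:
            flagged[task] = rel_diff
    return flagged

# In practice, `observed` would come from the PR's results file, e.g.:
#   from matbench.bench import MatbenchBenchmark
#   mb = MatbenchBenchmark.from_file("results.json.gz")
#   observed = {t.dataset_name: t.scores["mae"]["mean"] for t in mb.tasks}

# Placeholder values for illustration only:
reported = {"matbench_phonons": 30.0, "matbench_mp_gap": 0.200}
observed = {"matbench_phonons": 31.0, "matbench_mp_gap": 0.260}

print(largest_discrepancies(reported, observed))
```

With these placeholder numbers, only the task whose relative deviation exceeds 10% would be flagged for closer investigation.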
Is this better?
Yes that looks good to me, but for clarification what are the "QM runs"?
Ah, sorry, I updated it. I just meant from training on the QM9 (QM7) datasets, which is usually given in the papers. So the hyperparameters with which the results on QM9 can be reproduced.
Oh great! Yeah, if you submit a new one with more optimized hyperparameters, that would be very interesting to compare to this. Though looking at the results here, they are very good and line up quite well with the original paper. It is still very interesting to me that MEGNet does so well on the phonon DOS problem.
I'll leave it to @hrushikesh-s to merge this in when he sees fit
Matbench Pull Request
Add MEGNet benchmark from "Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals" by Chi Chen et al.