ml-evs closed this issue 1 year ago
@ml-evs - I think we addressed these comments in https://github.com/openjournals/joss-reviews/issues/5035 by mistake. Let us know if any of these issues still need to be addressed before publication.
Hi @ajmedford, thanks for pointing this out, and thanks to you and @nicoleyghu for the work implementing my suggestions. I've ticked off the things that are fixed above, and although I think some basic examples would be helpful (e.g., showing plots that describe the output of the sweep example), I'm satisfied that the examples cover your advanced features well.
I've just been going through my initial testing scripts and I've run into a couple of minor issues.
The default `fp_scheme` seems to have changed since my last attempts, so the docs are now out of date here:
```diff
- "fp_scheme": str, # Fingerprinting scheme to feature dataset, "gaussian" or "gmp" (default: "gaussian")
+ "fp_scheme": str, # Fingerprinting scheme to feature dataset, "gaussian", "gmp" or "gmpordernorm" (default: "gmpordernorm")
```
My old training script didn't set `fp_scheme` and now fails with the error:
```
  File "/home/mevans/src/amptorch/amptorch/dataset.py", line 49, in __init__
    self.descriptor = construct_descriptor(descriptor_setup)
  File "/home/mevans/src/amptorch/amptorch/dataset.py", line 131, in construct_descriptor
    descriptor = GMPOrderNorm(MCSHs=fp_params, elements=elements)
  File "/home/mevans/src/amptorch/amptorch/descriptor/GMPOrderNorm/__init__.py", line 35, in __init__
    self.default_cutoff()
  File "/home/mevans/src/amptorch/amptorch/descriptor/GMPOrderNorm/__init__.py", line 59, in default_cutoff
    sigmas = self.MCSHs["MCSHs"]["sigmas"]
```
There is of course no requirement that your API stays fixed during review, but I thought I'd mention it in case it was unintentional! Setting `fp_scheme` to `"gaussian"` allows me to train potentials once again.
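For anyone hitting the same traceback, a minimal sketch of the workaround: pin `fp_scheme` explicitly in the config rather than relying on the default. The surrounding key layout (`"dataset"`, the helper function) is illustrative, based on the documentation snippet quoted above, not amptorch's exact API.

```python
# Illustrative config fragment: pin the fingerprint scheme explicitly so a
# changed library default cannot silently break an old training script.
# Only "fp_scheme" comes from the docs quoted above; the rest is a sketch.
config = {
    "dataset": {
        "fp_scheme": "gaussian",  # documented default is now "gmpordernorm"
    },
}

def get_fp_scheme(cfg: dict, default: str = "gmpordernorm") -> str:
    """Read the fingerprint scheme, falling back to the documented default."""
    return cfg.get("dataset", {}).get("fp_scheme", default)

print(get_fp_scheme(config))  # -> gaussian
print(get_fp_scheme({}))      # -> gmpordernorm
```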
@ml-evs Thanks for these suggestions and pointing out the conflicts in the documentation. I think everything is now taken care of, except for the config data class, which is a work in progress but will hopefully be finished soon. I have opened a separate issue for that here: https://github.com/ulissigroup/amptorch/issues/123, and, for logistical reasons, I'm hoping we can close this issue and finalize the paper while we finish this up.
Regarding the other suggestions:
Let us know if you would like to see any additional changes before getting this officially published.
Thanks @ajmedford and @nicoleyghu for the hard work on this, I think they really help (my own) understanding of the code -- I hope it wasn't too arduous. I'm happy to close this issue and will give my recommendation over in the JOSS issue.
Thanks @ml-evs for all the detailed feedback! While the changes were a lot of work, I think they made the package much stronger, and were worth the effort. We also appreciate your patience as we navigated this process and learned a lot about open-source software development along the way.
Hi @ajmedford and other authors! Apologies for the slightly slow uptake on my review (https://github.com/openjournals/joss-reviews/issues/5035) -- I have now been able to install the package and try the examples and have enough to give a first round of feedback.
The `config` passed to `AtomsTrainer` in many of the examples could be wrapped into a class (perhaps a dataclass). The structure of the class is basically already provided in https://amptorch.readthedocs.io/en/latest/usage.html, but it is currently very easy to make a mistake or get lost amongst default values. This should also aid development going forwards.
`Path(__file__).parent / "pseudodensity_psp"`
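A hedged sketch of what the dataclass suggestion could look like; apart from `fp_scheme` (taken from the docs), all field names here are hypothetical, not amptorch's actual API.

```python
from dataclasses import asdict, dataclass, field

# Illustrative only: wrapping the trainer config in dataclasses so that
# defaults are declared in one place and typos in key names fail loudly.
# Field names other than fp_scheme are hypothetical.
@dataclass
class DatasetConfig:
    fp_scheme: str = "gmpordernorm"  # documented default
    raw_data: list = field(default_factory=list)

@dataclass
class TrainerConfig:
    dataset: DatasetConfig = field(default_factory=DatasetConfig)
    debug: bool = False

    def to_dict(self) -> dict:
        """Convert back to the plain-dict form the trainer currently expects."""
        return asdict(self)

cfg = TrainerConfig(dataset=DatasetConfig(fp_scheme="gaussian"))
print(cfg.to_dict()["dataset"]["fp_scheme"])  # -> gaussian
```

A misspelled field (e.g. `DatasetConfig(fp_schme=...)`) then raises a `TypeError` at construction time instead of being silently ignored by a dict lookup.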
`consistency_test.py` fails, as well as several tests in `test_script.py` (in both CPU and GPU mode). Could you perhaps provide some documentation for executing the tests, in case it is my setup that is wrong? (test failures hidden below)