coarse-graining / cgnet

learning coarse-grained force fields

suggestions to #158 #159

Closed brookehus closed 4 years ago

brookehus commented 4 years ago

This PR makes several changes, some relevant to #158 and some not.

These changes are unrelated:

  1. Force the n_beads input in SchnetFeature to be provided. The default was None, but I can't imagine a situation where we would want it left unspecified, so the user must now supply it.

  2. Change beadwise_batchnorm to a bool (in SchnetFeature only). It didn't make sense to me that we were feeding it an n_beads argument twice. Unless there is a reason we might normalize by something other than n_beads, beadwise_batchnorm in SchnetFeature now takes True or False (default False): if False, it passes None down the line, and if True, it passes n_beads down (see the sketch after this list). I therefore removed the checker import from feature.py, since it's only needed in schnet_utils.py now, and modified the relevant tests. Of course, we are generally NOT checking that people have adhered to the types specified in the documentation, so there may still be errors, but this is certainly not unique to this case.

  3. Allow the passing of track_running_stats to BatchNorm1d through the argument batchnorm_running_stats.
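A minimal sketch of the argument plumbing described in items 1-3 above; the class body and any names other than n_beads, beadwise_batchnorm, and batchnorm_running_stats are illustrative placeholders, not the actual cgnet API:

```python
# Hypothetical sketch only -- not the real SchnetFeature implementation.
import torch.nn as nn


class SchnetFeature(nn.Module):
    def __init__(self, n_beads,
                 beadwise_batchnorm=False,
                 batchnorm_running_stats=False):
        super().__init__()
        if n_beads is None:
            # item 1: n_beads must now be supplied by the user
            raise ValueError("n_beads must be provided to SchnetFeature")
        # item 2: the bool decides whether n_beads or None is passed down the line
        batchnorm_beads = n_beads if beadwise_batchnorm else None
        if batchnorm_beads is not None:
            # item 3: track_running_stats is forwarded to BatchNorm1d
            self.batchnorm = nn.BatchNorm1d(
                batchnorm_beads,
                track_running_stats=batchnorm_running_stats)
        else:
            self.batchnorm = None
```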

This one is related:

  1. Changes to dataset_loss (see the sketch below):

     A) I switched model_mode, which was allowed to be 'train' or 'eval', to train_mode, which is a bool. To me this makes a lot more sense than opening up a string argument and having to catch typos; if there are only two options, why not just use a bool?

     B) I moved train_mode earlier in the argument list. We don't always need to add new arguments at the end; "more important" ones go ahead. In our tests we should always be specifying arguments explicitly, so this shouldn't cause any issues in theory (although I don't think we're 100% good about this).

     C) The current defaults are optimizer=None and train_mode=True, which will raise an error by default: the user must either give an optimizer or specify train_mode=False. I think this will help avoid mistakes, since there is no way to use dataset_loss "out of the box".

     D) Therefore there is no need to check the argument for typos.
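For concreteness, here is a hedged sketch of the behavior described in A)-C); the remaining parameters and the loss computation are placeholders, not the actual implementation:

```python
# Hypothetical sketch: only train_mode and optimizer reflect the change above.
def dataset_loss(model, loader, train_mode=True, optimizer=None):
    if train_mode:
        if optimizer is None:
            # the default settings (optimizer=None, train_mode=True) raise,
            # so dataset_loss cannot be used "out of the box"
            raise ValueError(
                "optimizer must be supplied when train_mode=True; "
                "pass train_mode=False for evaluation")
        model.train()
    else:
        model.eval()
    # ... accumulate and return the loss over the loader ...
```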

A few general notes:

  1. IN GENERAL we trust the user to supply arguments of the types listed in the documentation, and we do not anticipate typos. In some especially important cases we can opt to check these things, but in general I do not think it is a good use of our time to guard against every possible typo.

A better way to do what you did would be to check the value within the function, e.g.

if model_mode not in ['train', 'eval']:
    raise ValueError(...)

rather than anticipating specific typos. But if there are only two options, we should just use a bool.

  2. When you run autopep8, PLEASE only apply it to the parts of the code you are editing. PRs with a bunch of irrelevant autopep8 changes are frustrating to review. If you want to change unrelated things, make that a separate PR.
nec4 commented 4 years ago

Taking a look now!

nec4 commented 4 years ago

I agree with all of the changes - LGTM!