choderalab / espaloma

Extensible Surrogate Potential of Ab initio Learned and Optimized by Message-passing Algorithm 🍹https://arxiv.org/abs/2010.01196
https://docs.espaloma.org/en/latest/

restructure #9

Open yuanqing-wang opened 4 years ago

yuanqing-wang commented 4 years ago

I think it's time, for the sake of scaling this up to more systematic experiments, to restructure this project.

The primary challenge I've realized while designing experiments and structuring the current project is that there are too many parts in this pipeline that we want to model. Specifically:

There are problems with each of these objects, and arguably there are further downstream objects we could add, namely those needed for OpenMM simulations.

But first, here's how I picture structuring the repo:

The following functions are needed to support the aforementioned pipeline in a cleaner manner:

The following are the trickiest choices that I would like to discuss here:

maxentile commented 4 years ago

Nice, thanks for these thoughtful observations and proposals, @yuanqing-wang ! I mulled this over yesterday afternoon and this morning, and largely agree with this. We want to separate concerns in a way that lets us do the comparisons we plan as painlessly as possible, and it's tricky because we want to make comparisons at a few different levels.

I would draw the lines between the tasks slightly differently, and in particular I would emphasize the distinctions between the molecule graph, the graph you perform message-passing on, and the factor graph representing the potential energy function.

I would start by stating the overall task in a way that doesn't commit to a specific way to solve it.

f: molecule --> potential_energy_fxn

where f is purely a function of the molecular graph, and may incorporate any of the choices we want to compare in this study ("reference MM force field", "fingerprints and random forests", "graph-nets", ...), and potential_energy_fxn : coordinates --> energy is a function purely of the geometry (but which may have internal attributes we want to inspect or export).

To implement f using a graph-net, you may further split this up: f(molecule) = readout_net(graph_net(derive_graph(molecule))), where:

To implement f using the other approaches that we need for control experiments (e.g., to assess how much mileage we're getting from more or less elaborate choices of derive_graph or graph_net), you may split it up differently, e.g. f(molecule) = readout_net(fingerprint(molecule)), where

For different tasks, we may be interested in doing different things with the output potential_energy_fxn.
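
To make the factoring above concrete, here is a minimal sketch of how these pieces might compose. The names follow the notation in this thread, but the signatures are illustrative assumptions, not the actual espaloma API:

```python
# Illustrative sketch only; names follow the notation in this thread,
# signatures are assumptions rather than the actual espaloma API.


def make_f(derive_graph, graph_net, readout_net):
    """Compose f: molecule --> potential_energy_fxn via message passing."""

    def f(molecule):
        graph = derive_graph(molecule)      # molecule -> graph to message-pass on
        representations = graph_net(graph)  # message passing over that graph
        return readout_net(representations)  # -> potential_energy_fxn (coordinates --> energy)

    return f


def make_f_baseline(fingerprint, readout_net):
    """Control experiment: f(molecule) = readout_net(fingerprint(molecule))."""

    def f(molecule):
        return readout_net(fingerprint(molecule))

    return f
```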

yuanqing-wang commented 4 years ago

Thanks for fleshing this out, @maxentile! I think it's a lot clearer to distinguish the graphs at these different stages.

I would suggest that we look more closely at potential_energy_fn. We might benefit from yet another intermediate layer named something like parametrized_graph. I think it's not super straightforward how to jump directly from readout to a function object that takes in coordinates as input and outputs energy.

Moreover, we could compare parametrized_graph objects between methods, and it would also make things like batching a lot easier.
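
For illustration, here is a rough sketch of what such an intermediate layer could look like; ParametrizedGraph and its fields are hypothetical names, not an existing espaloma class:

```python
# Hypothetical sketch of the proposed intermediate layer; class and field
# names are made up for illustration.
from dataclasses import dataclass, field


@dataclass
class ParametrizedGraph:
    """Factor graph with parameters assigned to each term."""
    bond_params: dict = field(default_factory=dict)   # (i, j) -> (k, r0)
    angle_params: dict = field(default_factory=dict)  # (i, j, k) -> (k, theta0)


def to_potential_energy_fn(pg: ParametrizedGraph):
    """Turn a parametrized graph into a coordinates --> energy function."""

    def energy(coordinates):
        # harmonic bond terms only; angle/torsion/nonbonded terms would follow
        total = 0.0
        for (i, j), (k, r0) in pg.bond_params.items():
            r = sum((a - b) ** 2 for a, b in zip(coordinates[i], coordinates[j])) ** 0.5
            total += 0.5 * k * (r - r0) ** 2
        return total

    return energy
```

Comparing parametrized_graph objects across methods would then reduce to comparing their parameter tables, independently of how the energies are evaluated.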

maxentile commented 4 years ago

I would suggest that we look more closely at potential_energy_fn. We might benefit from yet another intermediate layer named something like parametrized_graph. I think it's not super straightforward how to jump directly from readout to a function object that takes in coordinates as input and outputs energy.

Agreed -- there's a lot going on at that step!

Perhaps we can narrow down by listing the types of comparisons involving this step that we need to support, and the types of comparisons we may eventually want to support.

Required comparisons (based on current version of paper outline):

To support this, maybe we have two subclasses of PotentialEnergy -- PotentialEnergyClass1, PotentialEnergyClass2 -- which contain differently structured factor graphs as instance attributes. Perhaps these factor graphs are initialized from a molecule to have all possible terms present but with default values for each term's parameters (e.g. force constants of 0 for all harmonic_*_force factors). A learned parameter-assignment function overwrites some of these parameters. These classes should support both autodiff-friendly computation of energies / forces on batches of geometries, as well as export to an OpenMM-friendly representation.
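
A minimal sketch of that shape, with made-up class and method names (the real structure would depend on how the factor graphs are stored):

```python
# Hypothetical sketch only; class/method names are assumptions for illustration.


class PotentialEnergy:
    def __init__(self, molecule):
        # enumerate all possible terms up front, with inert default parameters
        self.factors = self._initialize_factors(molecule)

    def _initialize_factors(self, molecule):
        raise NotImplementedError

    def assign_parameters(self, parameter_assignment_fn):
        # a learned parameter-assignment function overwrites some defaults
        for factor in self.factors:
            factor.params = parameter_assignment_fn(factor)

    def energies(self, batch_of_geometries):
        # autodiff-friendly energies/forces on a batch of geometries
        raise NotImplementedError

    def to_openmm(self):
        # export to an OpenMM-friendly representation
        raise NotImplementedError


class PotentialEnergyClass1(PotentialEnergy):
    def _initialize_factors(self, molecule):
        # e.g. harmonic bond/angle, torsion, and nonbonded factors,
        # with force constants defaulting to 0
        ...


class PotentialEnergyClass2(PotentialEnergy):
    def _initialize_factors(self, molecule):
        # additionally includes coupling (cross) terms
        ...
```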

Speculative comparisons:

yuanqing-wang commented 4 years ago

Rather than hard-coding Class I and Class II, how about having a ParametrizedGraph object that can host both, and supplying the Class I and Class II terms as methods of the object? We can of course have presets controlled by flags, something like:

if self.terms == 'class-i':
    self.terms = [getattr(self, x) for x in ['bond', 'angle', 'non_bonded']]
yuanqing-wang commented 4 years ago

Learned atom representations vs. learned factor representations.

I agree that we should definitely compare this. But just in terms of how to engineer it: the ParametrizedGraph object resulting from the message-passing step should carry the parameters on the factor graph anyway. The difference you're mentioning, if I understand correctly, is just the architectural choice for how we parametrize the factor graph, be it pooling after message passing, hierarchical message passing, or something else.
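
As a toy illustration of the "pooling after MP" option, here is one way learned atom representations might be pooled into bond-factor parameters; the layer names and shapes are assumptions, not espaloma code:

```python
# Toy illustration of pooling learned atom representations into bond-factor
# parameters; names and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class BondReadout(nn.Module):
    def __init__(self, hidden_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # -> (force constant, equilibrium length)
        )

    def forward(self, atom_reps, bond_indices):
        # atom_reps: (n_atoms, hidden_dim); bond_indices: (n_bonds, 2)
        h_i = atom_reps[bond_indices[:, 0]]
        h_j = atom_reps[bond_indices[:, 1]]
        # sum pooling keeps the readout invariant to the (i, j) atom order
        return self.mlp(h_i + h_j)
```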

maxentile commented 4 years ago

Rather than hard-coding Class I and Class II, how about having a ParametrizedGraph object that can host both, and supplying the Class I and Class II terms as methods of the object? We can of course have presets controlled by flags, something like:

if self.terms == 'class-i':
    self.terms = [getattr(self, x) for x in ['bond', 'angle', 'non_bonded']]

Both options are basically the same for a single binary comparison.

In one case, there's a single class ParameterizedGraph, whose method definitions etc. each contain two branches, selected by the value of a flag on the object. In the other case there are two sub-classes of ParameterizedGraph, whose method definitions etc. contain one or the other branch.

I prefer to have two sub-classes, rather than having if-else branches that check a string flag to find out what kind of object the ParameterizedGraph really is. I think it doesn't make too much difference if there are only two variants of ParameterizedGraph we'll need to consider in the lifespan of the code.

However, if we're entertaining the possibility of more than two variants of ParameterizedGraph (e.g. allowing different parameterization of each factor), I think it will become increasingly unwieldy to have if/else branches to tell which parts of which method definitions apply based on which flags are present.

In general, when we find ourselves branching on some string that says what class an object is, that's often a hint that we'd be better off with sub-classes. (This page has a helpful comparison of these two options: https://refactoring.guru/replace-conditional-with-polymorphism )
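
As a toy illustration of the two options (with made-up method bodies, not actual espaloma code):

```python
# Option 1: one class, branching on a string flag in every method.
class ParameterizedGraphFlagged:
    def __init__(self, kind='class-i'):
        self.kind = kind

    def term_names(self):
        if self.kind == 'class-i':
            return ['bond', 'angle', 'non_bonded']
        elif self.kind == 'class-ii':
            return ['bond', 'angle', 'non_bonded', 'bond_bond', 'bond_angle']


# Option 2: one sub-class per variant; adding a new variant means adding a
# new sub-class rather than threading another flag through every method.
class ParameterizedGraph:
    def term_names(self):
        raise NotImplementedError


class ClassIParameterizedGraph(ParameterizedGraph):
    def term_names(self):
        return ['bond', 'angle', 'non_bonded']


class ClassIIParameterizedGraph(ParameterizedGraph):
    def term_names(self):
        return ['bond', 'angle', 'non_bonded', 'bond_bond', 'bond_angle']
```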