deepmodeling / DMFF

DMFF (Differentiable Molecular Force Field) is a JAX-based Python package that provides a fully differentiable implementation of molecular force field models.
GNU Lesser General Public License v3.0

Question about gradient #113

Open faranak1991 opened 1 year ago

faranak1991 commented 1 year ago

Summary

Hello, I'm quite interested in using DMFF, but I find myself somewhat puzzled, and I'd greatly appreciate your assistance in clarifying the theoretical aspects. Specifically, I'd like to understand the process of introducing minor perturbations to the FF parameters in each iteration, followed by the use of MBAR to estimate properties based on these modified parameters, and ultimately, the computation of gradients with respect to each parameter. In essence, does MBAR have the capability to predict the energy and other system properties when presented with a new set of force field parameters? Any help would be appreciated (@WangXinyan940).

Motivation

using DMFF

Suggested Solutions

No response

Further Information, Files, and Links

No response

WangXinyan940 commented 1 year ago

Yes, MBAR can estimate how a property changes with new parameters. In fact, when your reference data are sampled from only one potential function, the MBAR estimator reduces to Zwanzig reweighting. When your reference data come from multiple potential functions (or the same potential function with different FF parameters), MBAR works as described in the original paper.

This page may help: https://pymbar.readthedocs.io/en/master/mbar.html#pymbar.MBAR.compute_expectations
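To make the single-sampling-state (Zwanzig) case concrete, here is a minimal NumPy sketch — not DMFF code; the per-sample energies and property values are made-up placeholders. Given the energies of one trajectory evaluated under both the sampling parameters and a trial parameter set, the property in the trial state is estimated as an exponentially weighted average, with no new simulation:

```python
import numpy as np

# Hypothetical data: energies (in units of kT) of the SAME samples,
# evaluated with the sampling parameters and with a trial parameter set.
rng = np.random.default_rng(0)
u_ref = rng.normal(0.0, 1.0, size=1000)                  # u(x_i; p_ref)
u_new = u_ref + 0.1 * rng.normal(0.0, 0.2, size=1000)    # u(x_i; p_new)
A = rng.normal(5.0, 0.5, size=1000)                      # property A(x_i)

# Zwanzig reweighting: <A>_new = <A e^{-du}>_ref / <e^{-du}>_ref
du = u_new - u_ref
w = np.exp(-(du - du.min()))   # shift the exponent for numerical stability
w /= w.sum()                   # normalized per-sample weights
A_new = np.sum(w * A)          # estimate of <A> in the (unsimulated) new state
```

pymbar's `MBAR.compute_expectations` (linked above) generalizes this to samples pooled from several sampling states.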

faranak1991 commented 1 year ago

@WangXinyan940 . I appreciate your answer. I've recently visited the "pymbar" page, but I'm still somewhat puzzled about how MBAR handles the new force field parameters that haven't undergone any MD simulations for energy calculation. Can you please clarify how MBAR reads and compares these new parameters with the original ones?

KuangYu commented 1 year ago

MBAR itself is merely a way to evaluate ensemble averages and the free energy of a "target state" using samples from other states (it is recommended to read the original MBAR paper first to get the gist of the method: J. Chem. Phys. 136, 144102). For example, let us say that the target state you are interested in is defined by a set of parameters p0, and the samples are collected from MD simulations in different "sampling states" defined by other parameters p1, p2, etc. You do not need to run MD simulations at p0; instead, you can estimate the ensemble average corresponding to p0 by reweighting the MD samples from p1/p2/... Of course, the estimate of the ensemble average A is a function of p0: A = A(p0).

All DMFF does is make this estimate differentiable: while keeping the sampling states (p1/p2/...) fixed, you can conceptually change the value of p0 slightly, and DMFF will tell you how the ensemble average changes with respect to p0 at that point, i.e., dA(p0)/dp0, without running MD simulations at p0 + \delta. This gradient informs you how to update p0 in the right direction, so you may update it (how exactly the update is done depends on the optimization algorithm you are using). This process is repeated until convergence is met. Occasionally, you will also need to update your sampling states (p1/p2) when they stop overlapping with p0; this is called "resampling".
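The idea of differentiating a reweighted ensemble average can be sketched in a few lines of JAX. This is a toy illustration, not DMFF's API: the harmonic "force field" u(x; p) and the grid of points standing in for MD samples are both assumptions made for the example.

```python
import jax
import jax.numpy as jnp

# Toy 1-D "force field": harmonic energy u(x; p) = 0.5 * p * x**2 (units of kT).
def u(x, p):
    return 0.5 * p * x ** 2

# Samples drawn from the sampling state p_ref. Here a fixed grid stands in
# for an MD trajectory; in practice these come from actual simulations.
p_ref = 1.0
x = jnp.linspace(-3.0, 3.0, 601)

def ensemble_average(p0):
    # Reweight the p_ref samples to the target state p0, then average A(x) = x**2.
    du = u(x, p0) - u(x, p_ref)
    w = jax.nn.softmax(-du)        # normalized reweighting factors
    return jnp.sum(w * x ** 2)     # A(p0) = <x^2> estimated at p0

A0 = ensemble_average(1.0)                # estimate at the sampling point itself
dA_dp = jax.grad(ensemble_average)(1.0)   # dA(p0)/dp0, with no new MD run
```

Because `ensemble_average` is an ordinary JAX function of p0, `jax.grad` gives the parameter gradient directly; this is the differentiability that drives the optimizer.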

So in short, DMFF does not read in a new p, run MD, and compare it with the results of the old p. Instead, DMFF runs MD at p and uses MBAR to evaluate how the ensemble averages differentiate with respect to p. I hope my explanation helps. But I guess it is best to read the original MBAR paper and our DMFF paper (J. Chem. Theory Comput. 2023, 19, 17, 5897–5909) first.

faranak1991 commented 1 year ago

Dear @KuangYu, I appreciate your thorough response. Upon reviewing your DMFF paper, I've noticed that in the workflow depicted in Figure 2, the modified parameters are returned to MBAR as the new unsampled state. My inquiry pertains to which aspect of this new state MBAR is examining. Specifically, I'm curious whether MBAR is analyzing the probability distribution of these newly updated parameters or if it's directly assessing the updated parameters themselves.

KuangYu commented 1 year ago

What you are asking about is the process of "resampling": that is, we use the new parameters to generate a new set of samples, which are then used for the gradient evaluation in the following optimization steps. The logic of resampling is as follows:

  1. I am not sure what you mean by "analyzing". But in principle, MBAR takes a set of samples, and the parameters used to generate those samples (say p0), as input. Differentiable MBAR can then, in principle, figure out dA/dp at any parameter p. The optimizer, taking dA/dp from differentiable MBAR, determines how p changes during the optimization. (MBAR does not assess the updated p; it merely evaluates the gradient with respect to p and feeds it to the optimizer, which updates p according to that gradient.)
  2. In principle, point 1 works for any p0 and p, as long as you have infinitely many samples. In reality, you do not: due to the limited sample size, the evaluation of dA/dp carries statistical noise, and this noise grows as p0 and p become more different. Occasionally, when p0 and p are too different, the MBAR estimate of dA/dp becomes complete nonsense and the optimization cannot continue.
  3. To avoid the difficulty in the last point, when we find that p0 and p have become too different, we replace p0 with the current p and generate a new sample set using the new p0 (i.e., the current p); this is called a "resample". After resampling, your p0 is the same as p, so the evaluation of dA/dp becomes reliable again. With the new p0, you then continue doing what you did in point 1, i.e., updating p, until p drifts far away from p0 again, which is when the next resample happens. Note that resampling does not happen at every step; it happens only when p0 and p are too different.
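The three points above amount to a driver loop that one could sketch like this. It is purely illustrative: `run_md`, `weights`, and `grad_A` are hypothetical callables (not DMFF functions), and the Kish effective sample size is one common way to decide that p0 and p no longer overlap well enough.

```python
import numpy as np

def effective_sample_size(w):
    # Kish effective sample size of normalized reweighting weights:
    # n_eff = 1 / sum(w_i^2); equals n for uniform weights, ~1 when one
    # sample dominates (i.e., p0 and p barely overlap).
    return 1.0 / np.sum(w ** 2)

def optimize(p, run_md, weights, grad_A, lr=1e-2, n_steps=100, ess_frac=0.5):
    # p0: parameters of the current sampling state; p: parameters being optimized.
    # run_md(p)            -> samples generated by MD at parameters p
    # weights(s, p0, p)    -> normalized reweighting weights for samples s
    # grad_A(s, p)         -> dA/dp estimated from samples s (differentiable MBAR)
    p0 = p
    samples = run_md(p0)                       # initial MD run at p0
    for _ in range(n_steps):
        w = weights(samples, p0, p)
        if effective_sample_size(w) < ess_frac * len(w):
            p0 = p                             # resample: refresh sampling state
            samples = run_md(p0)
        p = p - lr * grad_A(samples, p)        # gradient step, no MD needed here
    return p
```

Note that `run_md` is only called at the start and inside the resampling branch, matching point 3: most parameter updates reuse the existing samples.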

So, to answer your question ("whether MBAR is analyzing the probability distribution of these newly updated parameters or if it's directly assessing the updated parameters themselves"): MBAR is doing neither. MBAR is not analyzing the probability distribution of p in each step, and it is not updating p either. MBAR always evaluates dA/dp using samples from p0; it simply needs to adjust p0 once in a while, to make sure the statistical noise in the dA/dp evaluation stays small enough.

faranak1991 commented 1 year ago

@KuangYu. To clarify, when you mention "resample," you are referring to conducting an MD simulation using the updated parameters. This simulation is necessary to generate the reduced potential energies required by MBAR. So, in essence, an MD run is still a prerequisite in this context. Is my understanding accurate?

KuangYu commented 1 year ago

"When you mention 'resample,' you are referring to conducting an MD simulation using the updated parameters." - Yes.

"This simulation is necessary to generate the reduced potential energy required for MBAR." - I am not sure what you mean by "reduced potential energy" here...

"In essence, an MD run is still a prerequisite in this context." - Yes, you need an MD run to start with, and an MD run once every few steps to update your samples. But it is usually not necessary for each parameter update.