markovmodel / ivampnets


Sometimes fails to estimate the real values of the implied timescales of SynaptotagminC2A via iVAMPnet #6

Closed: Toolxx closed this issue 10 months ago

Toolxx commented 1 year ago

Hi. Since I found that the models of my own system sometimes fail to estimate the implied timescales, I went back to the demo code for SynaptotagminC2A. When I ran this example, I also found that it sometimes fails to estimate some of the implied timescale values.

I would like to ask some questions about this. From my understanding of the literature on VAMPnet/MSM construction, this can happen when the data are insufficient to estimate the model. However, in your SynaptotagminC2A example you successfully fitted an iVAMPnet, and the 92 different 1 microsecond simulations of SynaptotagminC2A are a relatively large amount of data, so I would expect the processes resolved by the fitted iVAMPnet to be well connected. Besides, from a practical point of view, I have run the demo code several times and found that when the "trace_all" metric is high, there are fewer cases where the implied timescales cannot be estimated.

In my own research on a system of interest (one 10 μs simulation and nine 1 μs simulations), I have encountered a similar situation. Or is this simply the result of uncertainty in the numerical optimization (i.e., model estimation/training) of the neural network?
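For reference, the implied timescales follow from the eigenvalues λ_i of the Koopman matrix estimated at lag time τ via t_i = −τ / ln|λ_i|. A minimal NumPy sketch of this standard formula (not the ivampnets code itself) shows where the estimate can break down:

```python
import numpy as np

def implied_timescales(koopman_matrix, lag):
    """Implied timescales t_i = -lag / ln|lambda_i| from the eigenvalues
    of a Koopman/transition matrix estimated at lag time `lag`."""
    eigvals = np.linalg.eigvals(koopman_matrix)
    mods = np.sort(np.abs(eigvals))[::-1][1:]  # drop the stationary eigenvalue
    its = np.full(mods.shape, np.nan)
    # Moduli >= 1 give undefined or negative timescales, which is one way
    # the ITS estimate visibly "fails" for a poorly conditioned model.
    valid = (mods > 0) & (mods < 1)
    its[valid] = -lag / np.log(mods[valid])
    return its
```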

Thanks for any help.

amardt commented 1 year ago

Hi,

Cool that you are really making your own observations about the behavior; that helps a lot for giving advice! So here comes my hypothesis: based on your observations, a high trace_all implies that the state definitions/probabilities cover a larger range. You can check these directly: sometimes they only differ slightly, but optimally they should cover the range [0, 1]. This is a common issue when training VAMPnets, and training for the trace should mitigate it. Small differences in the state definitions result in numerical instabilities when inverting the matrix, and therefore the ITS estimates might fail. I would suggest you increase the training time during which the trace is also maximized, or turn the trace term on again at a later training stage. Hope this helps to fix your problem.
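A quick way to check the coverage of the state probabilities (a sketch; `chi` stands for the network's softmax outputs evaluated on your trajectories, and the helper name is hypothetical, not part of the ivampnets API):

```python
import numpy as np

def state_coverage(chi):
    """Report per-state min/max of the network's state probabilities.

    `chi` is assumed to be the softmax output of a trained (i)VAMPnet
    evaluated on the data, shape (n_frames, n_states); well-resolved
    states should each come close to covering [0, 1].
    """
    lo, hi = chi.min(axis=0), chi.max(axis=0)
    for i in range(chi.shape[1]):
        print(f"state {i}: min={lo[i]:.3f} max={hi[i]:.3f} range={hi[i] - lo[i]:.3f}")
    # Nearly degenerate state definitions (small ranges) make the
    # covariance matrices close to singular, so the matrix inversion
    # needed for the ITS becomes numerically unstable.
    return hi - lo
```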

Toolxx commented 1 year ago

Hi Dr. Mardt,

I have actually noticed the "trace_all" criterion; it does help with the ITS estimation. However, another problem it brings up at the same time is that the validation score decreases as the training time grows (see the figure). The training score approaches the theoretical maximum of the VAMP-E score, but the validation score just goes down. From my basic practical experience with DL, this seems to be overfitting due to the long training time? How do I go about understanding or avoiding this, since overfitting makes the result less trustworthy? [figure: hERG_atomistic_train_vs_val_score_run3, training vs. validation score]

I have another question, since you mentioned that "the small differences in state definitions will result in numerical instabilities when inverting the matrix." In the system I am studying (a relatively large ion-channel protein compared to SynaptotagminC2A), the models obtained from multiple runs of iVAMPnets differ much more. Specifically, different runs construct the Markov dynamical states differently (i.e., with different underlying biophysical meaning), and even the division into independent Markov domains differs somewhat between runs. Judging by the VAMP-E scores alone, all of these models do a good job of modeling the dynamical system, yet they seem to give rather inconsistent results. How should I understand this? Can you provide some suggestions? Thanks for any help.

amardt commented 1 year ago

Hi, so first of all, your understanding of overfitting is correct, and the plot does look like it. At the very end of your training I would suggest turning off the trace_all option, since in the end you only want to optimize the true score. In any case, I would suggest you save the best model according to the validation score and use that as the final model.
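A minimal PyTorch-style sketch of that checkpointing idea (the training and validation callables are placeholders for your own routines, not the ivampnets API):

```python
import copy
import torch

def train_with_checkpointing(model, n_epochs, train_one_epoch, validate_vampe):
    """Keep the parameters that achieve the best validation score.

    `train_one_epoch` and `validate_vampe` are placeholders for your own
    training step and VAMP-E validation scoring.
    """
    best_score, best_state = -float("inf"), None
    for epoch in range(n_epochs):
        train_one_epoch(model)
        val_score = validate_vampe(model)
        if val_score > best_score:
            best_score = val_score
            # Deep-copy so later training does not overwrite the snapshot.
            best_state = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)  # final model = best on validation
    torch.save(best_state, "best_model.pt")
    return model, best_score
```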

So I must admit that the method is not optimal in the sense that it depends on the initialization of the weights. At the beginning there already exists some form of random state assignment, and the model then optimizes from there towards the closest "optimal" solution (a local optimum). However, finding the globally optimal solution sometimes requires first optimizing through a region that is worse than the current assignment.

That being said, I think you should first try to increase the lag time if possible. Thereby the VAMP score becomes more pronounced; for smaller tau values the timescales are all very close to each other, especially when the model is still far from the converged value. For iVAMPnets, after comparing the VAMP-E scores you should then look at the different independence scores as the next measure of which model is better.

Finally, you need to find the two hyperparameters: the number of subsystems and the number of states per subsystem. I would suggest starting with two states per subsystem and then incrementally increasing the number of subsystems while monitoring the independence scores, as in the sketch below. When you see the independence scores increase as you take more subsystems, you have found the optimal number of subsystems. I would then check whether it consistently finds the same subsystems. Next you can increase the number of states per subsystem. From there on the result becomes more dependent on the initialization, which makes it trickier. Perhaps you can keep me posted on how this works out.
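One way to organize that scan (a sketch of the suggestion above; `train_ivampnet` and `independence_score` are hypothetical placeholders for your own training routine and independence metric, not the ivampnets API):

```python
def scan_n_subsystems(data, train_ivampnet, independence_score,
                      max_subsystems=6):
    """Fix 2 states per subsystem and increase the number of subsystems
    while monitoring the independence score."""
    scores = {}
    for n in range(2, max_subsystems + 1):
        model = train_ivampnet(data, n_subsystems=n, states_per_subsystem=2)
        scores[n] = independence_score(model)
        print(f"{n} subsystems: independence score {scores[n]:.4f}")
    # Once the score starts to rise when adding subsystems, the previous
    # count is a good candidate for the optimal number.
    return scores
```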

All the best