deepmodeling / dpgen2

2nd generation of the Deep Potential GENerator
https://docs.deepmodeling.com/projects/dpgen2/
GNU Lesser General Public License v3.0

Request for new features: stage-specific parameters for explore and fp, distinguishing training parameters for finetune vs. iter-initial-model #216

Open Vibsteamer opened 6 months ago

Vibsteamer commented 6 months ago

REQUEST 1:

Expect the following parameters to gain further structure so that they can be assigned per exploration stage:

  1. explore/convergence (all parameters within)
  2. explore/max_numb_iter
  3. explore/fatal_at_max
  4. fp/task_max

e.g., a per-stage structure like the one sketched below.
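A minimal sketch of what such per-stage assignment could look like, written as a Python dict mirroring the JSON input. The keys `stage_configs` and `stage_task_max`, and the idea of index-matched per-stage overrides, are assumptions for illustration only, not existing dpgen2 options; the remaining keys follow the current explore/fp blocks as I understand them.

```python
# Hypothetical sketch only: "stage_configs" and "stage_task_max" are NOT
# existing dpgen2 keys; they illustrate the requested per-stage assignment.
explore = {
    "convergence": {            # global default, used when a stage has no override
        "type": "fixed-levels",
        "conv_accuracy": 0.9,
        "level_f_lo": 0.05,
        "level_f_hi": 0.50,
    },
    "max_numb_iter": 5,
    "fatal_at_max": True,
    # proposed: per-stage overrides, index-matched to the list of stages
    "stage_configs": [
        {"convergence": {"conv_accuracy": 0.0}, "max_numb_iter": 1, "fatal_at_max": False},  # stage_0
        {"convergence": {"conv_accuracy": 0.0}, "max_numb_iter": 1, "fatal_at_max": False},  # stage_1
        {"max_numb_iter": 8, "fatal_at_max": True},                                          # stage_2
    ],
}

fp = {
    "task_max": 300,                    # global default
    "stage_task_max": [100, 100, 300],  # proposed per-stage values
}
```

An alternative would be to let each entry of the existing stages list carry its own convergence/max_numb_iter/fatal_at_max, whichever fits the schema better.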

REQUEST 2:

Expect support for different ending pref_e/f/v values for the initial finetune from multi-task pre-trained models and for the successive init_model training from the finetuned initial model.

Currently train/config supports only start parameters but no end parameters, e.g. only "init_model_start_pref_e" but no "init_model_end_pref_e". Instead, the end prefs are inherited from the limit_prefs of the single training script defined in train/config/templated_script.

Maybe this needs support for two scripts via train/config/templated_script, or new init_model_end_pref_e/f/v parameters added to train/config.
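A minimal sketch of the second option, again as a Python dict mirroring the JSON input; the init_model_start_pref_* keys exist today, while the init_model_end_pref_* keys are the hypothetical addition being requested.

```python
# Sketch of the proposed addition to train/config. The *_start_pref_* keys
# exist today; the *_end_pref_* keys below are hypothetical and only
# illustrate the request.
train_config = {
    # existing: starting loss prefactors when training from an init model
    "init_model_start_pref_e": 0.1,
    "init_model_start_pref_f": 100.0,
    "init_model_start_pref_v": 0.0,
    # proposed: explicit ending prefactors, so they no longer have to be
    # inherited from limit_pref_e/f/v of the single templated training script
    "init_model_end_pref_e": 1.0,
    "init_model_end_pref_f": 1.0,
    "init_model_end_pref_v": 0.0,
}
```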

Scenario motivating REQUEST 1

In practice, when DP-GEN2 is initiated from pre-trained models, multiple successive exploration stages are used to enhance the exploration efficiency over a complex sample space.

The sample space consists of derivatives of (1) many severely different initial configurations, (2) both trivial dynamics images and significant low-probability instances, and (3) successors of low-probability instances, which are also trivial but likewise severely different from their initial/parent configurations.

- (1) suffers from species bias after pre-training (and finetuning), which leads to over-sampling of the full trajectories of specific far-from-pre-training configurations.
- (2) is our central target.
- (3) suffers from conformer bias after pre-training (and finetuning), which leads to over-sampling of these trivial successor configurations.

Thus stage_0 and stage_1 are used to debias (1) and (3) by randomly selecting candidates from a broader model_devi range; no final exploration convergence is expected for these two stages. stage_2 is the one actually meant to converge, targeting (2), and its related parameters would differ from those of the debiasing stages.

Scenario motivating REQUEST 2

Tests showed different parameter preferences for the trainings in these two stages.

Vibsteamer commented 6 months ago

BTW, due to some compatibility limitations, I'm using the branch https://github.com/zjgemi/dpgen2/tree/deepmd-pytorch from, and thanks to, @zjgemi.