leeping / geomeTRIC

Geometry optimization code that includes the TRIC coordinate system
https://geometric.readthedocs.io/

Ideas for finite_difference_grad.py #182

Open annulen opened 8 months ago

annulen commented 8 months ago

```diff
@@ -145,7 +155,7 @@ def parse_fd_args(*args):

 def main():
     args = parse_fd_args(sys.argv[1:])
```

On the other hand, I have experience with another module for handling command line arguments: absl.flags. It allows defining flags right in the modules where they are used, and any script that uses those modules, directly or indirectly, will automatically be able to parse their flags from argv. The downside is an extra dependency and a somewhat less user-friendly --help. I can explain more about it if you are interested.
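For concreteness, here is a minimal sketch of that pattern (the flag names are hypothetical, not proposed geomeTRIC options): the module defines its flags at import time, and any script that imports it, directly or indirectly, gets them parsed from argv by `app.run`.

```python
from absl import app, flags

FLAGS = flags.FLAGS

# Flags defined at module level; visible to any script importing this module.
flags.DEFINE_float("fd_step", 1e-3, "Finite-difference step size (hypothetical flag).")
flags.DEFINE_integer("nprocs", 1, "Number of parallel gradient jobs (hypothetical flag).")

def main(argv):
    del argv  # unused; flags were already parsed by app.run
    print(f"step={FLAGS.fd_step}, nprocs={FLAGS.nprocs}")

if __name__ == "__main__":
    app.run(main)  # parses all flags defined in this module and any imported ones
```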

leeping commented 8 months ago

Thanks for the suggestions. I agree with many of the suggestions you made for finite_difference_grad.py; ideally it would share many of its command line arguments with the other "programs". A progress bar would be great (I never did get this to work). The embarrassingly parallel mode of finite_difference_grad.py (as well as the normal mode calculation in the optimizations) is currently handled using Work Queue. It creates a dependency but also enables one to run the gradient jobs on different physical machines.

Yes, I think it would be a good idea to use the optimization step as a finite difference step and compare the resulting energy change with the projection of the gradient along that direction. For large steps a significant disagreement can be expected, but the agreement should improve as the steps become smaller, assuming the gradient and energy are consistent. This could be done as part of the geometry optimization loop, so that the user can be warned when there is an energy/gradient inconsistency. I don't think additional steps to improve the numerical gradient quality are necessary, but it would be nice if implemented cleanly.

In fact it may be possible to use the energy change to "correct" the quantum chemical gradient, similar to how one updates the Hessian using BFGS, but I think that is a new research project.
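A minimal sketch of the check described above, assuming the optimization loop already has the energies and gradients at both endpoints of the accepted step (the function name and tolerance are hypothetical): for a quadratic surface the energy change equals the average of the two projected gradients, so a mismatch that persists as the step shrinks signals an energy/gradient inconsistency.

```python
import numpy as np

def warn_if_inconsistent(e_old, e_new, g_old, g_new, dy, rel_tol=1e-2):
    """Compare the actual energy change over the step dy with the change
    predicted by projecting the endpoint gradients onto dy (trapezoid
    rule, exact for a quadratic surface)."""
    predicted = 0.5 * np.dot(g_old + g_new, dy)
    actual = e_new - e_old
    if abs(predicted - actual) > rel_tol * max(abs(actual), 1e-12):
        print(f"Warning: possible energy/gradient inconsistency: "
              f"dE = {actual:.3e}, gradient-projected dE = {predicted:.3e}")
```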

annulen commented 8 months ago

> The embarrassingly parallel mode of finite_difference_grad.py (as well as the normal mode calculation in the optimizations) is currently handled using Work Queue. It creates a dependency but also enables one to run the gradient jobs on different physical machines.

Does it allow running N jobs on the same machine? I only have one for now :)

leeping commented 8 months ago

Yes. You simply run finite_difference_grad.py and multiple copies of work_queue_worker on the same machine.
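To illustrate the pattern, here is a hedged sketch using the cctools Work Queue Python bindings (this is not geomeTRIC's actual code; the command and file names are hypothetical): a master process submits one gradient job per displaced geometry, and every work_queue_worker that connects, whether started locally or on another machine, pulls tasks from the queue.

```python
import work_queue as wq

# Hypothetical displaced-geometry input files produced by the master.
displaced_geometries = ["disp_000.xyz", "disp_001.xyz"]

q = wq.WorkQueue(port=9123)
for i, geom in enumerate(displaced_geometries):
    # "run_gradient.sh" is a hypothetical wrapper around the QC engine.
    t = wq.Task("./run_gradient.sh input.xyz > grad.out")
    t.specify_input_file("run_gradient.sh")
    t.specify_input_file(geom, "input.xyz")
    t.specify_output_file(f"grad_{i}.out", "grad.out")
    q.submit(t)

# Workers started with e.g. `work_queue_worker localhost 9123` (one per
# core on the same machine, or on other machines) execute the tasks.
while not q.empty():
    t = q.wait(5)  # block for up to 5 seconds for a finished task
    if t:
        print(f"Task {t.id} returned with status {t.return_status}")
```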

annulen commented 8 months ago

Another practical consideration: I would like to evaluate the quality of gradients on the 9-molecule cluster we discussed in another issue, but that system contains 180 atoms, so computing a full finite-difference gradient would require a huge amount of resources. Evaluating 3 points along a single step vector, however, could be done quickly.
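To make the cost argument concrete, here is a minimal sketch of the single-direction check (the names are hypothetical, not geomeTRIC functions): a full central-difference gradient for 180 atoms would take 2 × 3 × 180 = 1080 energy evaluations, while probing one step direction takes only two extra energies (three points counting the midpoint), independent of system size.

```python
import numpy as np

def directional_fd_check(energy_fn, grad, x, d, h=1e-3):
    """Compare the analytic directional derivative grad . d with a
    3-point central finite difference along the unit vector d.
    Costs 2 energy evaluations regardless of the number of atoms."""
    d = d / np.linalg.norm(d)
    e_plus = energy_fn(x + h * d)          # E at x + h*d
    e_minus = energy_fn(x - h * d)         # E at x - h*d
    fd = (e_plus - e_minus) / (2.0 * h)    # numerical directional derivative
    analytic = np.dot(grad, d)             # projection of the analytic gradient
    return fd, analytic, abs(fd - analytic)
```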