duartegroup / autodE

automated reaction profile generation
https://duartegroup.github.io/autodE/
MIT License

DIC iterated back-transform possible bug #257

Closed: shoubhikraj closed this issue 1 year ago

shoubhikraj commented 1 year ago

I am not sure if this is intentional, but when delocalised internal coordinates are updated by addition (the iterative back-transform), these lines look suspect:

https://github.com/duartegroup/autodE/blob/161818ce505b443c10ce8ecfde78c863007aedd6/autode/opt/coordinates/dic.py#L235-L247

If the back-transform fails to converge, the Cartesians are set to the initial guess x_1 defined here:

https://github.com/duartegroup/autodE/blob/161818ce505b443c10ce8ecfde78c863007aedd6/autode/opt/coordinates/dic.py#L215

I think it should be using the x_k from the final iteration and not x_1.

Also, the if self._allow_unconverged_back_transform: check does nothing: the loop always exits at that point anyway, so the break has no effect.

I think the code should be something like this:

     if i == _max_back_transform_iterations:
         logger.warning(
             f"Failed to transform in {i} cycles. "
             f"Final RMS(s) = {rms_s:.8f}"
         )
         if self._allow_unconverged_back_transform:
             break            # keep the unconverged x_k from the final iteration
         else:
             x_k = x_1        # otherwise fall back to the initial Cartesian guess
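
For context, here is a minimal, self-contained sketch of how I read that control flow with the change above applied. This is not the actual dic.py code: the "back-transform" below just drives x**3 towards a target with a Newton-like update, and the names (back_transform, allow_unconverged) are made up for illustration; the only point is which Cartesians survive when the loop hits the iteration limit.

    import numpy as np

    _max_back_transform_iterations = 20

    def back_transform(x_1, target, allow_unconverged=False, tol=1e-10):
        """Toy iterative back-transform: drive x**3 towards `target`."""
        x_k = np.array(x_1, dtype=float)

        for i in range(1, _max_back_transform_iterations + 1):
            residual = target - x_k**3
            rms_s = np.sqrt(np.mean(residual**2))

            if rms_s < tol:
                break                                   # converged: keep x_k

            x_k = x_k + residual / (3 * x_k**2 + 1e-8)  # Newton-like update

            if i == _max_back_transform_iterations:
                # stand-in for logger.warning in the real code
                print(f"Failed to transform in {i} cycles. RMS(s) = {rms_s:.8f}")
                if allow_unconverged:
                    break                               # keep the unconverged x_k
                x_k = np.array(x_1, dtype=float)        # fall back to the initial guess

        return x_k

With this toy, back_transform([1.0, 2.0], np.array([8.0, 27.0])) converges to (2, 3), while a poor guess (e.g. one very close to zero) exhausts the 20 cycles and either keeps the unconverged x_k or reverts to x_1 depending on the flag.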
t-young31 commented 1 year ago

thanks @shoubhikraj

Looks like _allow_unconverged_back_transform does indeed do absolutely nothing! Probably missed a '# TODO' there. Does everything work okay if you remove that property?

> I think it should be using the x_k from the final iteration and not x_1.

x_k can be complete garbage so definitely don't want to use that

shoubhikraj commented 1 year ago

@t-young31 Hmm, I haven't tried removing the property; I will try that.

> x_k can be complete garbage so definitely don't want to use that

I thought maybe that was why you put the property there: so that someone could turn it on and get the unconverged result x_k instead of x_1?

Also, doesn't the RMS of the back-transform decrease at every iteration, so that x_k is always better than x_1?

t-young31 commented 1 year ago

> I thought maybe that was why you put the property there: so that someone could turn it on and get the unconverged result x_k instead of x_1?

I believe my intention was to raise an exception if the back transformation doesn't converge and _allow_unconverged_back_transform = False 😅
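
Roughly this, sketched against the same toy solver as above (made-up names, not the real dic.py API):

    import numpy as np

    def back_transform_strict(x_1, target, allow_unconverged=False,
                              tol=1e-10, max_iter=20):
        """Toy back-transform that raises when it fails and the flag is off."""
        x_k = np.array(x_1, dtype=float)
        for _ in range(max_iter):
            residual = target - x_k**3
            if np.sqrt(np.mean(residual**2)) < tol:
                return x_k                              # converged
            x_k = x_k + residual / (3 * x_k**2 + 1e-8)  # Newton-like update

        if allow_unconverged:
            return x_k          # caller opted in to an unconverged result
        raise RuntimeError(f"Back-transform failed to converge in {max_iter} cycles")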

> Also, doesn't the RMS of the back-transform decrease at every iteration, so that x_k is always better than x_1?

Sadly not in all cases. If it does go down then it converges, but if it doesn't, it flies off to infinity.
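
A toy illustration of what I mean (nothing like the real DIC equations, just a naive fixed-step update on the same x**3 = target problem): start far enough away and the residual grows every cycle instead of shrinking. As far as I understand, the real back-transform update built from the B-matrix pseudo-inverse can fail in an analogous way when the linearisation is poor far from the solution.

    target = 8.0         # want x**3 == 8, i.e. x == 2
    x_k = 10.0           # start far from the solution
    for i in range(5):
        residual = target - x_k**3
        print(f"iteration {i}: |residual| = {abs(residual):.3e}")
        x_k += 0.1 * residual      # fixed-step update, no damping or line search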