Closed bosonie closed 3 years ago
This PR fixes #82 in the easiest way possible, but there is a lot of room for improvement. For instance, we could return some info on the failed convergences in the output, adding the parameters that failed to converge to an output list or something similar.
Moreover, there is the caveat of what to use in the next convergence test when the previously tested parameter failed to converge. As I wrote in the documentation, the quantity passed in `converger_inputs` is now used, not the last tested parameter. Is this what we want? I would say yes, but maybe a more exhaustive warning should be produced.
Thanks for finding these two bugs!
> Moreover, there is the caveat of what to use in the next convergence test when the previously tested parameter failed to converge. As I wrote in the documentation, the quantity passed in `converger_inputs` is now used, not the last tested parameter.
Are you sure about this? Isn't the input from the last sequential step reused? Well maybe we are saying the same thing in different words. If so, I think it's fine.
> For instance, we could return some info on the failed convergences in the output, adding the parameters that failed to converge to an output list or something similar.
We could, but note that convergence of the same parameter can be attempted several times in a single SequentialConverger, so one step not reaching convergence doesn't mean that the parameter hasn't finally been converged. I think this can quickly be checked with the current output. The `converged_parameters` output is specified as not required, but in fact it is always returned (it is an empty dict if no parameters are converged). Therefore you can always check `"my_parameter" in outputs.converged_parameters` to see whether it has converged.
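The membership check described above can be sketched as follows. This is a hypothetical illustration, not the real aiida-siesta API; the example dict is assumed.

```python
# Hypothetical sketch (not the real aiida-siesta API): checking whether a
# parameter was finally converged via the always-returned converged_parameters
# dict, which is empty when nothing converged.
converged_parameters = {"meshcutoff": 7}  # assumed example output

def has_converged(converged, name):
    """True if `name` appears among the converged parameters."""
    return name in converged

print(has_converged(converged_parameters, "meshcutoff"))       # True
print(has_converged(converged_parameters, "pao-energyshift"))  # False
```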
But yeah, maybe we could check this ourselves in `return_results` and return the list of unconverged parameters; that seems like a good idea. I'm just saying that it's better to do it at the end, because further steps might still succeed in converging that parameter.
Probably the list of unconverged parameters is the most interesting output for a user; the rest is not necessary. If they want to see whether some step has failed, they can check for themselves.
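The point about computing the list only at the end can be sketched like this. The attempt history and values are assumed for illustration; this is not the actual `return_results` implementation.

```python
# Hypothetical sketch: compute the unconverged-parameters list only at the
# end, so a parameter that converged in a later step is not reported as failed.
attempted = ["meshcutoff", "pao-energyshift", "meshcutoff"]  # assumed history
converged = {"meshcutoff": 7}  # e.g. converged only on the second attempt

unconverged = sorted({p for p in attempted if p not in converged})
print(unconverged)  # ['pao-energyshift']
```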
> Are you sure about this? Isn't the input from the last sequential step reused? Well maybe we are saying the same thing in different words. If so, I think it's fine.
Just to be very clear. Suppose you have the following in `converger_inputs`:

```python
parameters = Dict(dict={})
basis = basis
...
...
```
Moreover, we have:

```python
iterate_over=[
    {'meshcutoff': [5, 6, 7]},
    {'pao-energyshift': [0.02, 0.015, 0.01, 0.005, 0.001]},
],
```
If the convergence of `'meshcutoff'` fails, the convergence of `'pao-energyshift'` is run with the empty parameters dict as input, not with a parameters dict containing `'meshcutoff': 7` Ry. Is that OK?
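The fallback behavior described here can be sketched as a simple merge: only converged values carry over into the next step. This is a hypothetical illustration of the described semantics, not code from the workchain.

```python
# Hypothetical sketch: a failed parameter does not carry its last tested
# value into the next convergence step; only converged values are merged
# into the original converger_inputs parameters.
original_parameters = {}  # the empty parameters dict from converger_inputs
converged = {}            # 'meshcutoff' failed, so nothing was recorded

def next_step_parameters(original, converged_params):
    merged = dict(original)
    merged.update(converged_params)  # only converged values carry over
    return merged

print(next_step_parameters(original_parameters, converged))  # {}
```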
I will implement an optional output listing the unconverged parameters, making sure that they are not converged at any stage in case the user attempted to converge a parameter several times. Then we can close this and release 1.1.1 once you approve.
> If the convergence of `'meshcutoff'` fails, the convergence of `'pao-energyshift'` is run with the empty parameters dict as input, not with a parameters dict containing `'meshcutoff': 7` Ry. Is that OK?
Yes. I was just pointing out that if

```python
iterate_over=[
    {'pao-energyshift': [0.02, 0.015, 0.01, 0.005, 0.001]},
    {'meshcutoff': [5, 6, 7]},
    {'kpoints_0': [0.02, 0.015, 0.01, 0.005, 0.001]},
],
```
and the mesh cutoff doesn't converge, the converged energy shift is still used.
Would you prefer that we take the last value even if it has not converged? Maybe that could be up to the user to choose.
@pfebrer96 I introduced the extra output listing the unconverged parameters.
> Would you prefer that we take the last value even if it has not converged? Maybe that could be up to the user to choose.
I decided to leave it as it is and throw a warning.
I await your review.
@pfebrer96 Applied the suggested changes.
Fixes #82. The parsing of the `converged_target_value` output of a convergence workchain is now done only when the convergence has been successful. The output `converged_target_value` is returned only if the attribute `last_target_value` is present in `self.ctx`. It might not be present if all the convergence workchains fail.
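The guard described above can be sketched as follows. This is a simplified, hypothetical illustration using a plain dict in place of the workchain context, not the actual aiida-siesta code.

```python
# Hypothetical sketch of the guard described in the PR: expose the
# converged_target_value output only when last_target_value was stored in
# the workchain context (it can be absent if every convergence workchain
# failed), so no AttributeError is raised when building the outputs.
def return_results(ctx):
    outputs = {}
    if "last_target_value" in ctx:
        outputs["converged_target_value"] = ctx["last_target_value"]
    return outputs

print(return_results({}))                          # {}
print(return_results({"last_target_value": 0.5}))  # {'converged_target_value': 0.5}
```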