Closed piperfw closed 4 months ago
@piperfw Yes, that was the idea. I wrote this function as a zeroth approximation to a good parameter estimation and thought that future, better approximations might want to make use of information in the system (such as looking at the eigenvalues, as you suggest). I guess that having this optional argument makes sense for future compatibility, i.e. one can set up the code passing the system, and a future version of OQuPy might actually use that information. But to avoid confusion I agree that adding a warning that it currently doesn't make use of the system information could be good.
Thanks @gefux, would something along the following lines be appropriate?
```python
import numpy as np

# assumes `sys` (system object), `bath` and `end_time` are defined in scope
def comm(A, B):
    return A @ B - B @ A

# sample points (trivial for t-independent system)
sample_times = [0, end_time / 2, end_time]
# sample system Hamiltonian, dissipation rates and commutator with bath coupling
HS_samples = [sys.Hamiltonian(t) for t in sample_times]
gamma_samples = [np.max(sys.gammas(t)) for t in sample_times]
HE_comm_samples = [comm(HS, bath.coupling_operator) for HS in HS_samples]
# get maximum eigenvalues/rates
HS_max = np.max([np.linalg.eigvalsh(HS) for HS in HS_samples])
gamma_max = np.max(gamma_samples)
# the commutator of two Hermitian operators is anti-Hermitian, so take
# absolute values of its (imaginary) eigenvalues
HE_comm_max = np.max([np.abs(np.linalg.eigvals(HE_comm))
                      for HE_comm in HE_comm_samples])
```
I would guess the subtle part is knowing the factors needed to extract a suitable dt from these maximum scales, e.g. dt ~ 1/HS_max. If HS commutes with HE, then we don't need to worry about Trotter error, so maybe assuming a dt sufficient to resolve the system dynamics (in a plot) is suitable.
As you write, I think the question is what to do with that information. Although for the case of [HE, HS] = 0 the timestep dt can be arbitrarily large, one might still want to resolve the system dynamics (which is where the eigenvalues of HS would come in). I have been working on the basis that one would use TEMPO if the dynamics is not analytically solvable (as the commuting case would be) and that one is interested in dynamics on the time scale of the environment correlation function. Hence only the correlation function determines the timestep (under those assumptions).
I appreciate that the function is a bit cryptic (that's mostly because it avoids recomputing correlations that have already been computed), so I'll roughly explain what it does: `_analyse_correlation()` calculates the integral of the correlation function for each interval using the trapezoidal rule. It also computes the integral for each interval divided into two smaller intervals, by adding a timestep in between and using the trapezoidal rule twice. Then, for each interval, the error is the difference between the two values (normalized by the full integral). The function then returns that list of errors and the cumulative integral (also normalized by the full integral), which are used to choose `dkmax` and `dt`.
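For context, the interval-halving error estimate described above might be sketched roughly as follows. This is only an illustrative reimplementation, not OQuPy's actual `_analyse_correlation()`; the function name, signature and sampling convention here are assumptions:

```python
import numpy as np

def analyse_correlation(corr, times):
    """Illustrative sketch of the interval-halving error analysis.

    `corr` is the correlation function sampled at `times`, which
    alternate interval endpoints and midpoints:
    times[0], times[1] (midpoint), times[2], ...
    Returns (errors, cumulative), both normalized by the full integral.
    """
    corr = np.asarray(corr, dtype=complex)
    times = np.asarray(times, dtype=float)
    full_integral = np.trapz(corr, times)
    errors, cumulative = [], []
    running = 0.0 + 0.0j
    for i in range(0, len(times) - 2, 2):
        t0, tm, t1 = times[i], times[i + 1], times[i + 2]
        c0, cm, c1 = corr[i], corr[i + 1], corr[i + 2]
        coarse = 0.5 * (t1 - t0) * (c0 + c1)        # one trapezoid
        fine = 0.5 * (tm - t0) * (c0 + cm) \
             + 0.5 * (t1 - tm) * (cm + c1)          # two trapezoids
        errors.append(abs(coarse - fine) / abs(full_integral))
        running += fine
        cumulative.append(abs(running) / abs(full_integral))
    return errors, cumulative
```

The list of per-interval errors can then be compared against a tolerance to pick `dt`, and the cumulative integral to pick `dkmax`.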
Thanks! That is helpful (I had not reviewed _analyse_correlation()).
> I have been working on the basis that one would use TEMPO if the dynamics is not analytically solvable (as the commuting case would be) and that one is interested in dynamics on a time scale of the environment correlation function.

I understand these assumptions, but also that the possibility for approximations involving the system was left open via the optional BaseSystem argument. Here's an idea that is loosely analogous to the steps you list above; perhaps let me know what you think:
Over the interval [0, end_time], a function `_analyse_evolution()` (or `_analyse_derivative()`, etc.) calculates the maximum local error from e.g. using Euler's method versus one with half the time-step `dt_sys`. It may also be possible to estimate the Trotter error ~ `[H_S, H_E] dt**2` at each step.
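As a sketch of what such an `_analyse_evolution()` / `_analyse_derivative()` might do (both names hypothetical), the local error of an Euler step can be estimated by comparing one full step against two half steps:

```python
import numpy as np

def euler_local_error(deriv, y0, t, dt):
    """Estimate the local error of a single Euler step of size dt by
    comparing it against two Euler steps of size dt/2.

    deriv(t, y) returns dy/dt; y0 is the state at time t.
    """
    full = y0 + dt * deriv(t, y0)                     # one step of size dt
    half = y0 + (dt / 2) * deriv(t, y0)               # two steps of size dt/2
    half = half + (dt / 2) * deriv(t + dt / 2, half)
    return np.max(np.abs(full - half))
```

The maximum of this error over the evolution could then be compared against a tolerance to bound `dt_sys`.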
This seems reasonable to me.
If one was to redesign / extend the guess_tempo_parameters() function, I suggest organizing it in smaller pieces which each return upper or lower bounds on the parameters, which are then intersected in the main function. For example: currently we calculate the upper bound on dt based on the structure of the correlation function. The change of the Hamiltonian over time of a time-dependent system, as you suggest, would lead to some different upper bound, etc., which are then intersected (i.e. one would need to take the minimum of the two upper bounds). The same with any 3rd or 4th reason we would come up with to restrict dt in any way.
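The intersection step could look roughly like this (a minimal sketch; the function and estimator names are made up):

```python
def intersect_dt_bounds(**upper_bounds):
    """Combine independent upper bounds on dt by taking their minimum,
    and report which consideration was limiting.

    Estimators that impose no bound pass None.
    """
    bounds = {name: b for name, b in upper_bounds.items() if b is not None}
    limiting = min(bounds, key=bounds.get)  # name with the smallest bound
    return bounds[limiting], limiting
```

For example, `intersect_dt_bounds(bath=0.1, system=0.05)` would return `(0.05, 'system')`, so the final message could tell the user which part was limiting.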
Thanks for the input; I agree intersection is the logical way to handle multiple parts. I would like to work on this if I find the time.
I have a branch pr/parameters which calculates a maximum frequency of a system object and sets dt to the Nyquist rate if that is smaller than the dt calculated from the bath correlations. In examples/guessing_parameters.py there are three examples I was using to test the utility of the system frequency calculation. I'll summarise the results here. In the following, 'sys' refers to the parameters having included the system in the guess_tempo_parameters call, and 'nosys' to the parameters without.
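This is not the actual branch code, but the Nyquist-rate idea might be sketched like this (the sampling strategy and factor conventions here are assumptions):

```python
import numpy as np

def nyquist_dt(hamiltonian, end_time, n_samples=100):
    """Hypothetical sketch: bound dt by the Nyquist rate of the fastest
    system frequency, taken as the maximum eigenvalue splitting of H(t)
    over a set of sample times."""
    times = np.linspace(0.0, end_time, n_samples)
    max_freq = 0.0
    for t in times:
        evals = np.linalg.eigvalsh(hamiltonian(t))
        # largest transition frequency (in cycles per unit time) at this t
        max_freq = max(max_freq, (evals.max() - evals.min()) / (2 * np.pi))
    # Nyquist: sample at least twice per period of the fastest oscillation
    return np.inf if max_freq == 0.0 else 1.0 / (2.0 * max_freq)
```

The guessed dt would then be the minimum of this and the value obtained from the bath correlations.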
**A - spin-boson model based on that in the Quickstart tutorial**
Three variants: i. a `0.5 * sigma_x` system Hamiltonian such that 'nosys' and 'sys' are the same, ii. a higher frequency `5 * sigma_x` system Hamiltonian, and iii. a higher frequency system Hamiltonian (`5 * sigma_x`) plus a large dissipation rate `~5 L[sigma_m]`. In ii.-iii. the larger dt curve is accurate, just less smooth due to fewer sample points. This doesn't seem all that useful, because this kind of undersampling is easy to spot and correct (choose a smaller dt) by eye.

**B - Pulse Hamiltonian adapted from the PT-TEMPO tutorial**
guess_tempo_parameters gives similar results to ii.-iii. in that the behaviour is correct with the larger time step, just clearly coarse grained. I expect what is saving the coarse grained calculation is that the Hamiltonian is being integrated (rather than just sampled) when constructing the propagators; it may be interesting to check that.

**C - Spin boson with a time-dependent Hamiltonian `Delta(t) sigma_x`**
Here the 'sys' and 'nosys' dt result differs significantly. The smaller dt calculated from the system frequency appears to do well to capture the time dependence of the Hamiltonian (here I included a reference calculation at a far smaller dt to check).

Overall, I think example C alone is probably fair reason to make this a useful feature to have. If the user doesn't pass a system object to guess_tempo_parameters, a calculation is just done using the bath correlation functions. If the user passes a system object, then the smaller dt gets chosen. A message should inform the user whether the bath or the system was the 'limiting' aspect (i.e. required the smaller dt), plus the usual warning about no guarantee of convergence and that results should be checked manually.
@gefux if you get a chance to look at these results, could you let me know what you think? The code I've added will probably need some tweaking, but the physical idea is there.
Additional information:

- sampling of the system continues until a `tolerance` (currently the same as used for the bath part) or a maximum `MAX_SYS_SAMPLES` is reached
- `exp(i pi/4)` is used as a 'typical' field value (which may not be true)

Solved by #124
Currently `oqupy.guess_tempo_parameters` takes an optional `BaseSystem` object but does nothing with it. @gefux Was the idea with this parameter to compute the system-environment commutator and so infer a suitable timestep to avoid Trotter error? A simple thing we can do is look at the eigenvalues and rates of the system Hamiltonian and Lindblad terms, although I understood the Trotter error was normally more important to characterise.
In the meantime we should probably remove the optional argument or add a NotImplemented note or similar. Overall I think guess_tempo_parameters is potentially a very useful function for people starting out with OQuPy.