Closed: GuillermoAbadLopez closed 2 weeks ago
I know we are currently using `drift_timeout + previous_timestamp` for the `calibrate_all` method in `CalibrationController`, to skip already-executed notebooks in case the whole execution is re-launched. I think that feature is nice to have; I'm just leaving the comments there to double check.
However, I think we can still get rid of the methods that load the previous execution results when initializing the node.
Definitely, I didn't see them! Removing them!
Ohh, and actually, this kind of creates a bug: before, it was fixed to 7200 s, and I changed it to `drift_timeout` as you say (on purpose, yes). But now we could be skipping notebooks whose dependants are being calibrated (previously this didn't happen, because all had the same time!).
So we should look into this and, like we did in `maintain()` previously, check that all the dependants also have a bigger `drift_timeout`!
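One way to express the "dependants are also covered" condition is to extend the per-node freshness check to every dependant. This is a hedged sketch under stated assumptions: `is_fresh`, `can_skip`, and the `dependants` attribute are hypothetical names for illustration, not the actual `maintain()` implementation.

```python
import time


def is_fresh(node, now: float) -> bool:
    """True if the node's last execution is still within its drift_timeout."""
    return (node.previous_timestamp is not None
            and now < node.previous_timestamp + node.drift_timeout)


def can_skip(node, now: float | None = None) -> bool:
    """Only skip a node if it is fresh AND all of its dependants are too,
    so we never skip a notebook whose dependants are being recalibrated."""
    now = time.time() if now is None else now
    return is_fresh(node, now) and all(is_fresh(dep, now) for dep in node.dependants)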
I think we should talk to Adrián tomorrow about what I mentioned to Guille: there is a way to analytically read the result of an all XY that could tell us which experiments to run and which ones to skip. That might be a simple way to start having this functionality; he already started implementing the analytical analysis of the all XY, so maybe we can pick it up.
> However I think we can still get rid of the methods that load the previous execution results when initializing the node
Thinking again @IsaacLA12, I think we can remove the `previous_output_parameters`, but maybe I would still leave the loading of the previous results, in case you need to build the qubit/fidelity/parameters tables out of the last calibration you just executed 🤔
It's a very corner case, and for automatic calibration it's totally useless, but for manual testing it can maybe be useful... Dunno, what do you think?
(For the moment I'll remove the `previous_output_parameters` though.)
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 95.76%. Comparing base (`4488086`) to head (`62a486b`).
@IsaacLA12 check the last commit: https://github.com/qilimanjaro-tech/qililab/pull/746/commits/1c8b27623e6eeb8c7582dff962c53309117ebf57. I've changed picking the first exported output to the last one, in a part of the code, but I remember we discussed this some time ago.
I did so because we were doing this check twice in the code, choosing differently in each case: `[0]` or `[-1]`.
(One case, I think, is when there is a line jump and the other when there is not (both outputs on the same line).) So I've unified them so that we always choose the last one, as I think we agreed last time. I'm just not sure if we did them differently on purpose, due to some possible problem somewhere... But that makes the documentation more complicated for users, and it doesn't really make sense to do one thing or the other depending on the case...
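The unification described here can be sketched as follows. The marker strings and the `last_exported_output` helper are assumptions for illustration only, not the actual qililab parsing code.

```python
import re


def last_exported_output(raw: str, start: str, end: str) -> str:
    """Return the LAST delimited output block in a notebook's raw stdout.

    Previously one code path took the first match ([0]) and another took
    the last ([-1]); unified so the last export always wins.
    """
    blocks = re.findall(re.escape(start) + r"(.*?)" + re.escape(end), raw, re.DOTALL)
    if not blocks:
        raise ValueError("No exported outputs found in the notebook output.")
    return blocks[-1]  # [-1] instead of [0]: the last export wins
```

Using `re.DOTALL` means the behaviour is the same whether or not there is a line jump between the two exported outputs, which is exactly the inconsistency the unification removes.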
I think I would still remove it. Before, it had a functionality that it does not have now; we found out that that corner case can be "sorted out" with it, but that problem is just a presentation problem, in the sense that it only affects how we report the output. I think that if we need the previous calibration (for manual checking), we should just go to the report of the previous one and check the values in the notebook executions, rather than relaunching a calibration just to obtain the tables, imo.
So if you remove them, I think we won't miss them 🤓
> ... So if you removed them, I think we wont miss them 🤓
Yeah, deep down I know it :). Let's discuss it further tomorrow, but yeah, we are aligned.
Maybe we'll remember something else it was useful for, like following a half-stopped calibration, or generating the end tables in a similar case, etc... I would need to check though; let's just think it through a bit more tomorrow 👌
Tested on hardware, and it was working nicely.
Do not merge this PR to `main` until we properly test it with a new chip. (We will work with this branch for next week's calibrations.)
This PR is linked to https://github.com/qilimanjaro-tech/qililab-portfolio/pull/435; both need to work together!