qilimanjaro-tech / qililab

Qililab is a generic and scalable quantum control library used for fast characterization and calibration of quantum chips. Qililab also offers the ability to execute high-level quantum algorithms with your quantum hardware.
Apache License 2.0

[QHC-565] Remove `check_data()` from calibration #746

Closed. GuillermoAbadLopez closed this 2 weeks ago

GuillermoAbadLopez commented 1 month ago

Do not merge this PR to main until we properly test it with a new chip. (We will work with this branch for next week's calibrations.)

This PR is linked to https://github.com/qilimanjaro-tech/qililab-portfolio/pull/435; both need to work together!

IsaacLA12 commented 1 month ago

I know we are currently using drift_timeout + previous_timestamp in the calibrate_all method of CalibrationController to skip already-executed notebooks when the whole execution is re-launched. I think that feature is nice to have; I'm just leaving the comments there to double-check.

However, I think we can still get rid of the methods that load the previous execution results when initializing the node.
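
For context, a minimal sketch of the skip logic being described, assuming hypothetical node attributes (previous_timestamp, drift_timeout); this is not qililab's actual API:

```python
from datetime import datetime, timezone

def can_skip_node(previous_timestamp: float | None, drift_timeout: float) -> bool:
    """Hypothetical check used when re-launching calibrate_all(): a notebook can be
    skipped if its previous execution is still within its drift_timeout window."""
    if previous_timestamp is None:
        return False  # never executed before: it must be calibrated
    now = datetime.now(timezone.utc).timestamp()
    return now < previous_timestamp + drift_timeout
```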

GuillermoAbadLopez commented 1 month ago

I know we are currently using drift_timeout + previous_timestamp in the calibrate_all method of CalibrationController to skip already-executed notebooks when the whole execution is re-launched. I think that feature is nice to have; I'm just leaving the comments there to double-check.

However, I think we can still get rid of the methods that load the previous execution results when initializing the node.

Definitely, I didn't see them! Removing them!


Ohh and actually, this kind of creates a bug:

Before, it was fixed to 7200 s, and I changed it to drift_timeout as you say (on purpose, yes). But now we could be skipping notebooks whose dependants are being calibrated (this didn't happen before, because all nodes had the same time!).

So we should look into this and, like we did in maintain() previously, check that all the dependants also have a bigger drift_timeout!
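
One possible reading of that extra check, extending the sketch above (node.dependants and the comparison direction are assumptions, not the actual qililab implementation):

```python
def can_skip_with_dependants(node, now: float) -> bool:
    """Hypothetical: skip a node only if it is still within its own drift_timeout
    AND every dependant has a drift_timeout at least as large, so that a skipped
    node cannot sit on top of dependants that are about to be re-calibrated
    (analogous to what maintain() used to check)."""
    fresh = (
        node.previous_timestamp is not None
        and now < node.previous_timestamp + node.drift_timeout
    )
    dependants_ok = all(
        dep.drift_timeout >= node.drift_timeout for dep in node.dependants
    )
    return fresh and dependants_ok
```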

jjmartinezQT commented 1 month ago

I know we are currently using drift_timeout + previous_timestamp in the calibrate_all method of CalibrationController to skip already-executed notebooks when the whole execution is re-launched. I think that feature is nice to have; I'm just leaving the comments there to double-check. However, I think we can still get rid of the methods that load the previous execution results when initializing the node.

Definitely, I didn't see them! Removing them!

Ohh and actually, this kind of creates a bug:

Before, it was fixed to 7200 s, and I changed it to drift_timeout as you say (on purpose, yes). But now we could be skipping notebooks whose dependants are being calibrated (this didn't happen before, because all nodes had the same time!).

So we should look into this and, like we did in maintain() previously, check that all the dependants also have a bigger drift_timeout!

I think we should talk to Adrián tomorrow about what I mentioned to Guille: there is a way to analytically read the result of an AllXY that could tell us which experiments to run and which ones to skip. That might be a simple way to start having this functionality; he already started implementing the analytical analysis of the AllXY, so maybe we can pick it up.
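
As a rough illustration of that idea (a toy sketch only; the thresholds and the error-to-notebook mapping are assumptions, not Adrián's analysis): the 21 AllXY sequences ideally give excited-state populations on three plateaus (0, 0.5, 1), and the shape of the deviations from those plateaus hints at which calibration notebooks need to be re-run.

```python
import numpy as np

# Ideal AllXY excited-state populations: 5 sequences at 0, 12 at 0.5, 4 at 1.
IDEAL_ALLXY = np.concatenate([np.zeros(5), np.full(12, 0.5), np.ones(4)])

def allxy_rerun_suggestions(populations: np.ndarray, tol: float = 0.05) -> list[str]:
    """Toy classifier mapping AllXY deviations to hypothetical notebook names."""
    deviations = np.asarray(populations) - IDEAL_ALLXY
    if np.max(np.abs(deviations)) < tol:
        return []  # within tolerance: nothing needs to be re-run
    suggestions = []
    middle = deviations[5:17]
    if abs(middle.mean()) > tol:  # whole middle plateau shifted: amplitude-like error
        suggestions.append("rabi_amplitude")
    if middle.std() > tol:        # middle plateau zig-zags: detuning/phase-like error
        suggestions.append("ramsey_detuning")
    return suggestions
```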

GuillermoAbadLopez commented 1 month ago

However, I think we can still get rid of the methods that load the previous execution results when initializing the node.

Thinking about it again, @IsaacLA12, I think we can remove the previous_output_parameters, but maybe I would still leave the loading of the previous results, in case you need to build the qubit/fidelity/parameters tables out of the last calibration you just executed 🤔

It's a very corner case, and for automatic calibration it's totally useless, but for manual testing it can maybe be useful... Dunno, what do you think?

(for the moment I'll remove the previous_output_parameters though)

codecov[bot] commented 1 month ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 95.76%. Comparing base (4488086) to head (62a486b).

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #746      +/-   ##
==========================================
+ Coverage   95.04%   95.76%   +0.71%
==========================================
  Files         267      266       -1
  Lines        8762     8568     -194
==========================================
- Hits         8328     8205     -123
+ Misses        434      363      -71
```

| [Flag](https://app.codecov.io/gh/qilimanjaro-tech/qililab/pull/746/flags?src=pr&el=flags&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=qilimanjaro-tech) | Coverage Δ | |
|---|---|---|
| [unittests](https://app.codecov.io/gh/qilimanjaro-tech/qililab/pull/746/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=qilimanjaro-tech) | `95.76% <100.00%> (+0.71%)` | :arrow_up: |

Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=qilimanjaro-tech#carryforward-flags-in-the-pull-request-comment) to find out more.

:umbrella: View full report in Codecov by Sentry.

GuillermoAbadLopez commented 1 month ago

@IsaacLA12 check the last commit: https://github.com/qilimanjaro-tech/qililab/pull/746/commits/1c8b27623e6eeb8c7582dff962c53309117ebf57. In one part of the code, I've changed it from picking the first exported output to picking the last one; I remember we discussed this some time ago.

I did so because we were doing this check twice in the code and choosing differently in each case ([0] or [-1]). (I think one case is when there is a line jump and the other when there is not, i.e. both outputs on the same line.) So I've unified them so that we always choose the last one, as I think we agreed last time. I'm just not sure whether we made them different on purpose because of some possible problem somewhere... But that makes the documentation more complicated for the user, and it doesn't really make sense to do one thing or the other depending on the case...
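
A minimal sketch of the unified behaviour (the marker string and helper name are made up; the point is just that both code paths now use [-1], the last exported output, instead of one of them using [0]):

```python
import re

def last_exported_output(notebook_text: str, marker: str = "EXPORTED_OUTPUT:") -> str | None:
    """Hypothetical helper: collect every exported output in the executed notebook,
    whether each one sits on its own line or several share a line, and always
    return the last one ([-1]) rather than the first ([0])."""
    matches = re.findall(rf"{re.escape(marker)}\s*(\S+)", notebook_text)
    return matches[-1] if matches else None
```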

linear[bot] commented 1 month ago

QHC-565 2) Remove current (unused) implementation of `check_data`

IsaacLA12 commented 1 month ago

I think I would still remove it. Before, it had functionality that it does not have now; we found out that the corner case can be "sorted out" with it, but that problem is really just a presentation problem, in the sense that it only affects how we report the output. I think that if we need the previous calibration (for manual checking), we should just go to the report of the previous one and check the values in the notebook executions, rather than relaunching a calibration just to obtain the tables, imo.

So if you remove them, I think we won't miss them 🤓

GuillermoAbadLopez commented 1 month ago

... So if you remove them, I think we won't miss them 🤓

Yeah, deep down I know it :). Let's discuss it further tomorrow, but we are aligned, yeah.

Maybe we'll remember something else it was useful for, like following a half-stopped calibration or generating the end tables in a similar case, etc... I would need to check, though; let's just think it through a bit more tomorrow 👌

GuillermoAbadLopez commented 2 weeks ago

Tested on hardware, and it was working nicely.