qiboteam / qibocal

Quantum calibration, characterization and validation module for Qibo.
https://qibo.science
Apache License 2.0

Error values of T2 #1023

Open HishamKazim opened 2 days ago

HishamKazim commented 2 days ago

This issue concerns the large discrepancy in the error values of T2 as shown in the PR below:

https://github.com/qiboteam/qibolab_platforms_qrc/pull/191

The error is greater than the actual T2 value (much greater).

alecandido commented 2 days ago

In general, having an uncertainty greater than the value is possible, but it means that the value is compatible with 0 (so it's either vanishing, or you don't even know its sign).

And this is clearly not the case for a fit like the following:

[image: fit of the T2 data]

It is pretty clear that the data are exponentially decaying, and the fit manages to match them pretty well. So you certainly have no doubt about the sign of the exponential (ascending or descending), while the value reported is 30 +/- 40 us.

http://login.qrccluster.com:9000/qYLzOD1ET9yCXMAr0ZsNmA==/#T2

We are not consuming the uncertainty on the parameters in any way during plotting, while the central value is used to produce the fit curve. I'd say that the plot of the fit confirms the central value was correctly found. The uncertainty should definitely be investigated.

Edoardo-Pedicillo commented 1 day ago

I have done a small investigation: the parameters and errors of the fit of the normalized data (popt and perr in the code https://github.com/qiboteam/qibocal/blob/5ff3d907acf67f77be12c91544575bacd600152e/src/qibocal/protocols/ramsey/utils.py#L88 ) are

popt = [0.00342815,  1. ,         1.2564083,  1.72139401, 1.70591943]
perr =  [0.37864146,  0.04844122,  0.23573182,  2.62129502,  2.53873094]

Even though the fit is good, curve_fit is not able to estimate the errors properly.
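
For reference, the quoted errors presumably come from the standard curve_fit pattern (a minimal sketch with made-up data, not the actual qibocal code): the one-sigma uncertainties are the square roots of the diagonal of the estimated covariance matrix, so any near-degeneracy between parameters inflates them.

    import numpy as np
    from scipy.optimize import curve_fit

    def decay(x, offset, amplitude, rate):
        """Toy exponential decay, used only to illustrate how perr is obtained."""
        return offset + amplitude * np.exp(-x * rate)

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 3.0, 50)
    y = decay(x, 0.5, 0.5, 1.7) + rng.normal(0.0, 0.01, x.size)

    popt, pcov = curve_fit(decay, x, y, p0=[0.4, 0.4, 1.0])
    perr = np.sqrt(np.diag(pcov))  # one-sigma errors, as quoted above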

I have tried removing the sinusoidal dependency from the fit function and the data normalization on the y axis (probabilities):


    import numpy as np

    def ramsey_fit(x, offset, amplitude, delta, phase, decay):
        """Damped sinusoidal fit (oscillating term disabled for this test)."""
        # return offset + amplitude * np.sin(x * delta + phase) * np.exp(-x * decay)
        return offset + amplitude * np.exp(-x * decay)

and I have obtained better results

popt = [ 0.46887992  0.47956018 25.69336208  0.          1.71742433]
perr =  [0.00568939 0.00512151 0.         0.         0.04029972]

The report for the second case follows: 2024-10-24_17:58:15

The last value in the popt list is the one we use for the evaluation of T2. It differs slightly in the two cases, but the second case should be the more reliable one (according to the physics of the system). So it is clear that the problem is related to how curve_fit evaluates the errors; I have tried to dig deeper, but I was not able to find anything significant.
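
As a cross-check on synthetic data (a sketch under the assumption that the near-degeneracy between the sine and exponential terms is the culprit; not qibocal code), one can fit the same purely decaying signal with both models and compare the errors:

    import numpy as np
    from scipy.optimize import curve_fit

    def damped_sine(x, offset, amplitude, delta, phase, decay):
        """Damped sinusoid, as in the original Ramsey fit."""
        return offset + amplitude * np.sin(x * delta + phase) * np.exp(-x * decay)

    def pure_decay(x, offset, amplitude, decay):
        """Pure exponential decay, as in the modified fit."""
        return offset + amplitude * np.exp(-x * decay)

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 3.0, 60)
    y = 0.5 + 0.5 * np.exp(-1.7 * x) + rng.normal(0.0, 0.01, x.size)

    popt_s, pcov_s = curve_fit(
        damped_sine, x, y, p0=[0.5, 0.5, 0.3, np.pi / 2, 1.5], maxfev=10000
    )
    popt_e, pcov_e = curve_fit(pure_decay, x, y, p0=[0.4, 0.4, 1.0])

    print("damped sine:", popt_s, np.sqrt(np.diag(pcov_s)))
    print("pure decay: ", popt_e, np.sqrt(np.diag(pcov_e)))

With zero true detuning, the damped-sinusoid fit will typically report much larger errors than the pure exponential one, even though both curves follow the data, consistently with what is reported above.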

alecandido commented 1 day ago

Ok, I was not aware that the sinusoidal dependency was present even in the fit of T2. I would have thought it was used just for the detuned Ramsey.

My potential explanation for what is happening is the following:

At this point, you get two different terms that could both fit the exponential decay, so you develop a flat (or at least flatter) direction in parameter space, which is a combination of the two terms. Since the error is estimated from the inverse curvature near the minimum, you find a minimum at a random (noise-driven, or at least noise-prone) point along the flat direction, where the curvature is very small and its inverse very large. If the flat direction is neither purely the sine nor purely the exponential, but a combination of the two (i.e. an eigenvector of the Hessian in parameter space), that wide, shallow direction projects onto both sets of parameters (exponential and sine), yielding large errors for both.
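
If one wants to check this hypothesis directly, a possible diagnostic (a sketch; here `pcov` would be the covariance matrix returned by curve_fit for the damped-sinusoid model) is to diagonalize the covariance and inspect its widest direction:

    import numpy as np

    def widest_direction(pcov):
        """Return the largest parameter variance and its direction in parameter
        space (eigh sorts eigenvalues in ascending order)."""
        eigvals, eigvecs = np.linalg.eigh(pcov)
        return eigvals[-1], eigvecs[:, -1]

    # variance, direction = widest_direction(pcov)
    # A single variance much larger than the others, with a direction mixing the
    # amplitude/phase and decay components, would confirm the flat direction.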

Most likely, the best option is just freezing the sine while measuring T2, as you did in your experiment.

Edoardo-Pedicillo commented 19 hours ago

Thanks @alecandido for the explanation.

Ok, I was not aware that the sinusoidal dependency was present even in the fit of T2. I would have thought it was used just for the detuned Ramsey.

The sinusoidal formula is used only in the Ramsey experiment(s); indeed, what @HishamKazim shows is a Ramsey with detuning 0.

Most likely, the best option is just freezing the sine while measuring T2, as you did in your experiment.

@HishamKazim can you try to repeat the T2 evaluation using the T2 routine (https://qibo.science/qibocal/stable/protocols/t2/t2.html) and upload the results here?

alecandido commented 18 hours ago

Yes, sorry for having said something incorrect.

Ok, I was not aware that the sinusoidal dependency was present even in the fit of T2. I would have thought it was used just for the detuned Ramsey.

The sine is just in Ramsey; the T2 routine @Edoardo-Pedicillo is proposing is indeed just the exponential decay. And that's correct.

andrea-pasquale commented 18 hours ago

We had similar discussions about this in the past. The reason why we use Ramsey to measure T2 is that you cannot know before running the experiment whether your drive frequency is sufficiently close to the actual qubit frequency to avoid oscillations. In fact, if you fit with just an exponential and there are oscillations, the fit is most likely going to fail (we saw this especially for the monitoring...). At this point we can try to pre-process the data before doing the fit, in order to choose whether we should perform an exponential fit or an exponential + sinusoidal fit. Off the top of my head, I'm thinking that we could do an FFT to detect whether there is an oscillating term: if we find one we also fit with a sine, if not we just fit with the exponential. Thoughts?
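
Something along these lines could work as pre-processing (a rough sketch with a hypothetical helper name and an arbitrary threshold, assuming uniformly spaced waiting times; not an existing qibocal function):

    import numpy as np

    def detect_oscillation(waits, probs, threshold=0.2):
        """Return (oscillating, frequency) from the dominant non-DC FFT peak.

        `threshold` is an arbitrary heuristic, to be tuned on real data.
        """
        signal = probs - np.mean(probs)
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=waits[1] - waits[0])
        peak = 1 + np.argmax(spectrum[1:])  # skip the (near-zero) DC bin
        return spectrum[peak] > threshold * spectrum.sum(), freqs[peak]

If the check reports an oscillation, fit with the damped sinusoid (possibly seeding the detuning with the detected frequency); otherwise, fall back to the pure exponential decay.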

alecandido commented 17 hours ago

What I would suggest is to do that in two steps: run with detuning, even a small one, to measure more precisely what the actual detuning is (which is what @HishamKazim already did in http://login.qrccluster.com:9000/qYLzOD1ET9yCXMAr0ZsNmA==/#Ramsey%20Detuned), and then run the pure T2 routine to fit the exponential decay.

In principle, you could do that even in a single protocol, but there is no advantage over the composition. I would try to rerun the same calibration reported above, just using the T2 routine with a pure exponential decay. If the oscillations are very mild (because you removed the sine before), you should still be able to fit the exponential decay quite precisely. Even if the chi2 may skyrocket for small errors and non-negligible oscillatory behavior, the minimum should still be identified very precisely, and the fit will not fail.

The direction I see for improvements is just to optimize the strategy to suppress the oscillatory behavior as much as possible, i.e. everything that happens before the T2 protocol and its pure exponential decay fit. But eventually, you should use the T2 protocol to extract the T2 value.

andrea-pasquale commented 17 hours ago

What I would suggest is to do that in two steps: run with detuning, even a small one, to measure more precisely what the actual detuning is.

You need to be careful about a small detuning: to get a valid correction on the qubit frequency, you need to be in the situation where the detuning is large compared to the frequency correction. This is why we usually choose to have a large detuning of a few MHz when we run Ramsey.

In general you can always say something like "I will run with detuning and afterwards I will run just the T2 experiment to measure T2". However, in some cases, even if you correct the qubit frequency, T2 can exhibit a non-monotonic behavior due to some spurious qubit-qubit coupling.

This is why I'm suggesting that we should do something within the protocol itself.

HishamKazim commented 16 hours ago

So I reran the Ramsey, the T2 (using the Ramsey with detuning 0), and the T2 runcard, as shown in the report below: http://login.qrccluster.com:9000/jX1y6NXDTvmbPHLEu5DIZQ==/. Let me know your thoughts @Edoardo-Pedicillo @alecandido @andrea-pasquale

[Screenshots of the two fit results, 2024-10-25]

andrea-pasquale commented 15 hours ago

Thanks @HishamKazim for the plots. Indeed, this confirms that an exponential fit should be enough.

alecandido commented 14 hours ago

Indeed, this is showing that in this case the exponential fit is doing its job. Not only because of the uncertainty reduction, but also because the central value of the exponential decay shifted more than its error in the pure exponential fit[*].

However, this is just phenomenological evidence that it is working fine for this specific case.

My personal advice would be to keep using this strategy consistently (in this case it worked well enough) and then accumulate evidence of when it fails, in order to keep improving the strategy. Most likely, the current one is not perfect at all. But without an example of how it could fail, it is harder to improve.


[*] The shift in the central value of the decay is interesting, and it makes the sine contamination pretty clear. However, to complete the empirical proof (i.e. to attribute the responsibility in this specific case), we should actually fit the same data with the two routines, instead of acquiring again. Though this would be the systematic way to proceed, the data seem stable enough across the two acquisitions (at least visually) that the refined procedure may just be a waste of time. Especially because it's not simple to achieve with a runcard, and currently non-trivial even with a script -> but it's an idea for improving the scripting capabilities!