sandialabs / pyGSTi

A python implementation of Gate Set Tomography
http://www.pygsti.info

Model reparameterization changing outcome probabilities #425

Open juangmendoza19 opened 5 months ago

juangmendoza19 commented 5 months ago

Calling set_all_parameterizations to convert a full TP model to a GLND model changes some outcome probabilities non-trivially.

To reproduce:

```python
from pygsti.modelpacks import smq1Q_XY as std

datagen_model = std.target_model("GLND")

# Arbitrary error where I observed the problem
error_vec = [0] * 48
error_vec[0] = .01
datagen_model.from_vector(error_vec)

design = std.create_gst_experiment_design(16)

# Circuit with the maximum difference
bad_circuit = design.all_circuits_needing_data[394]

datagen_model_copy = datagen_model.copy()
datagen_model_copy.set_all_parameterizations("full TP")
datagen_model_copy.set_all_parameterizations("GLND", ideal_model=std.target_model("GLND"))

datagen_model.probabilities(bad_circuit)['0'] - datagen_model_copy.probabilities(bad_circuit)['0']
```

Expected behavior
The conversion should leave outcome probabilities unchanged (up to machine precision). Instead, the code above outputs a probability difference of -1.406968064276981e-08. This is a substantial difference, and it causes issues in my current project, which requires comparison of gauge-equivalent models.

Environment:

Additional context
After an email exchange with Riley and Corey, Riley identified the problem in the state preparation: one of the state-preparation vector entries deviates by 2.7057608985464707e-08 after conversion. This makes sense considering the model only has errors in the state preparation.

I believe I have traced the issue to pygsti/modelmembers/states/__init__.py, line 269. The scipy optimization there returns exactly the number above, 2.7057608985464707e-08, as its error. I tried changing the tolerance of the optimization, but this did not seem to change its behavior.
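For reference, a minimal sketch of how the state-preparation deviation can be checked directly (this assumes the `datagen_model` and `datagen_model_copy` built in the reproduction snippet above, and that the prep label is `'rho0'` as in the smq1Q_XY model pack):

```python
import numpy as np

# Compare the dense prep vectors of the original model and the round-trip-converted copy.
# Assumes datagen_model / datagen_model_copy from the reproduction code above and the
# default prep label 'rho0' from the smq1Q_XY model pack.
rho_orig = datagen_model.preps['rho0'].to_dense()
rho_conv = datagen_model_copy.preps['rho0'].to_dense()

# Per the diagnosis above, one entry should deviate by roughly 2.7e-08.
print(np.max(np.abs(rho_orig - rho_conv)))
```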

juangmendoza19 commented 2 months ago

Update on this issue: I have identified a second bug in this function. set_all_parameterizations also does not work properly when the errors are on the measurements: the error channels and the provided ideal model are not used properly. The returned model never has measurement error, and its ideal measurement is instead the noisy measurement from the original model. @coreyostrove and I are currently working on fixing these.
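A rough sketch of how this can be observed (hypothetical setup, not the exact models from the report; it uses `depolarize(spam_noise=...)` only as a convenient way to put error on the SPAM elements, and assumes the default POVM label `'Mdefault'`):

```python
import numpy as np
from pygsti.modelpacks import smq1Q_XY as std

# Hypothetical noisy model: start from the GLND target, move to full TP, add SPAM error.
noisy_model = std.target_model("GLND")
noisy_model.set_all_parameterizations("full TP")
noisy_model = noisy_model.depolarize(spam_noise=0.01)

# Convert back to GLND, supplying the ideal model as in the original reproduction code.
converted = noisy_model.copy()
converted.set_all_parameterizations("GLND", ideal_model=std.target_model("GLND"))

# Compare POVM effect vectors before and after the conversion.  Per the report, the
# measurement error can be silently dropped from the converted model.
for lbl, effect in noisy_model.povms['Mdefault'].items():
    diff = np.linalg.norm(effect.to_dense() - converted.povms['Mdefault'][lbl].to_dense())
    print(lbl, diff)
```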

Going back to the original state-preparation problem, it seems that changing the optimization's "method" parameter to "Nelder-Mead" solves the problem, although I don't know whether using a different optimization algorithm will cause other issues.
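For context (generic scipy usage, not the actual objective or call in pyGSTi's source), the algorithm is selected through the `method` argument of `scipy.optimize.minimize`, so the workaround amounts to switching from the default gradient-based method to the derivative-free Nelder-Mead simplex method:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective only; NOT the objective in pygsti/modelmembers/states/__init__.py.
def objective(x):
    return np.sum((x - np.array([1.0, 2.0])) ** 2)

x0 = np.zeros(2)
result_default = minimize(objective, x0)                   # default (BFGS for this problem)
result_nm = minimize(objective, x0, method="Nelder-Mead")  # the workaround's choice

print(result_default.fun, result_nm.fun)
```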