mikedeltalima opened this issue 4 years ago
Here is a small example so you can reproduce this. Any advice welcome.
```python
import numpy as np
import qinfer as qi
from qutip import Qobj, identity, sigmax, sigmay, sigmaz


def get_model():
    basis = qi.tomography.pauli_basis(1)
    model = qi.tomography.TomographyModel(basis)
    return basis, model


def get_prior(basis):
    prior = qi.tomography.distributions.GinibreDistribution(basis)
    return prior


def get_state_from_prior(basis):
    prior = get_prior(basis)
    return prior.sample(n=1)[0].reshape(1, -1)


def get_measurement_ops(basis):
    pauli_x_plus = np.sqrt(2) * (basis[0] + basis[1]) / 2
    pauli_y_plus = np.sqrt(2) * (basis[0] + basis[2]) / 2
    pauli_z_plus = np.sqrt(2) * (basis[0] + basis[3]) / 2
    return pauli_x_plus, pauli_y_plus, pauli_z_plus


def get_experiment_params(basis, model, expparam_ops=None):
    expparamsx = np.zeros((1,), dtype=model.expparams_dtype)
    expparamsy = np.zeros((1,), dtype=model.expparams_dtype)
    expparamsz = np.zeros((1,), dtype=model.expparams_dtype)
    x_plus, y_plus, z_plus = get_measurement_ops(basis)
    expparamsx['meas'][0, :] = basis.state_to_modelparams(x_plus)
    expparamsy['meas'][0, :] = basis.state_to_modelparams(y_plus)
    expparamsz['meas'][0, :] = basis.state_to_modelparams(z_plus)
    return {'x': expparamsx, 'y': expparamsy, 'z': expparamsz}


def simulate_experiment(true_state_mp, model, expparams, n_measurements=500):
    results = []
    for _idx_meas in range(n_measurements):
        for e_name, e in expparams.items():
            result = model.simulate_experiment(true_state_mp, e), e, e_name
            results.append(result)
    return results


def train_updater(model, prior, results, n_particles=4000):
    updater = qi.SMCUpdater(model, n_particles, prior)
    for result, e, e_name in results:
        updater.update(result, e)
        updater.resample()
    return updater


def main():
    np.random.seed(123)
    basis, model = get_model()
    prior = get_prior(basis)
    true_state_mp = get_state_from_prior(basis)
    expparams = get_experiment_params(basis, model)
    results = simulate_experiment(true_state_mp, model, expparams)
    updater = train_updater(model, prior, results)


if __name__ == '__main__':
    main()
```
Hi @mikedeltalima, thanks for the example. I haven't used the tomography stuff too much, so I'm not super familiar with the conventions it uses, but if you look at your updater's posterior estimate of the covariance matrix, it is something like:
```python
print(updater.est_covariance_mtx())
```

```
[[-6.99440506e-14 -4.63518113e-15  9.76996262e-15  4.60742555e-15]
 [-4.63518113e-15  8.18510888e-04  2.98436578e-05  4.46977743e-05]
 [ 9.76996262e-15  2.98436578e-05  3.74699981e-04  1.34314729e-05]
 [ 4.60742555e-15  4.46977743e-05  1.34314729e-05  7.78574974e-04]]
```
What we see is that the negativity is coming from the first parameter, which is your coefficient on the identity Pauli. This looks to be numerical error around 0, because your model and prior do not allow this coefficient to be anything other than 1/sqrt(2). So my suggestions are: add some variance to your prior in this parameter (if that is sensible for you); exclude this parameter from your model (I'm not actually sure how difficult this is with the tomography module); or, finally, I think this error can safely be ignored.
It can be ignored because having thin manifolds in your posterior is not actually a problem, just a numerical waste, and the warning is being spuriously triggered by it. Note that if you look at, e.g., your posterior mean (`updater.est_mean()`), it's pretty close to the known true value.
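To illustrate the thin-manifold point in isolation (a numpy-only sketch with made-up samples, not actual QInfer output): if one parameter is pinned to a constant, as the identity coefficient is here, the empirical covariance has a zero eigenvalue in that direction, and round-off can nudge it slightly negative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake posterior samples: the first coordinate is pinned to 1/sqrt(2),
# mimicking the fixed identity coefficient; the rest vary freely.
samples = rng.normal(size=(4000, 4))
samples[:, 0] = 1 / np.sqrt(2)

cov = np.cov(samples, rowvar=False)
vals = np.linalg.eigvalsh(cov)  # sorted ascending
# The eigenvalue for the pinned direction is zero up to round-off,
# and in general can come out as a tiny negative number.
print(vals[0])
```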
Thanks @ihincks! Is there a way to parameterize my density matrices to get rid of the extra degree of freedom? Alternatively (or additionally), would it make sense to always return a positive-semidefinite covariance matrix? We could include a warning under a more verbose setting.
I'm not sure how to drop the identity out of your parameterization. There may be an easy way, but I don't see it after a quick glance. I personally wouldn't bother.
We probably want to change to something like the following check in utils.py:

```python
if not np.all(la.eig(cov)[0] >= -1e-12):
```
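As a self-contained illustration of why the tolerance matters (using a made-up 3×3 matrix, not actual QInfer output): a strict `>= 0` check rejects matrices whose only negativity is round-off noise, while the relaxed check accepts them.

```python
import numpy as np
import numpy.linalg as la

# A symmetric matrix that is PSD up to round-off: one eigenvalue
# is a tiny negative number at the level of floating-point noise.
cov = np.diag([-1e-14, 1e-3, 2e-3])

strict = np.all(la.eig(cov)[0] >= 0)         # rejects round-off noise
tolerant = np.all(la.eig(cov)[0] >= -1e-12)  # accepts it
print(strict, tolerant)
```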
@ihincks I can submit a pull request for that. Should there be an option to correct the covariance to zero?
Sounds great. Yes, I suppose we should clip the covariance if we're computing the eigenvalues anyway.
```python
vals, vecs = la.eig(cov)
if np.any(vals < -1e-12):
    warnings.warn("Covariance matrix is not positive semidefinite.")
return (vecs * np.maximum(vals, 0)) @ vecs.T.conj()
```
(did not check)
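A fleshed-out version of that sketch, checked in isolation (the helper name `clip_to_psd` is hypothetical, and QInfer's actual utils.py may differ). It uses `la.eigh` rather than `la.eig` since a covariance matrix is symmetric, which guarantees real eigenvalues and an orthonormal eigenbasis:

```python
import warnings
import numpy as np
import numpy.linalg as la

def clip_to_psd(cov, tol=1e-12):
    """Project a symmetric matrix onto the PSD cone by zeroing
    negative eigenvalues; warn if any eigenvalue is below -tol."""
    vals, vecs = la.eigh(cov)  # eigh: cov is symmetric
    if np.any(vals < -tol):
        warnings.warn("Covariance matrix is not positive semidefinite.")
    return (vecs * np.maximum(vals, 0)) @ vecs.T.conj()

# Example: the -1e-14 eigenvalue is within tolerance (no warning),
# but it is still clipped to zero in the returned matrix.
cov = np.diag([-1e-14, 1e-3, 2e-3])
psd = clip_to_psd(cov)
print(np.linalg.eigvalsh(psd).min())
```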
I'm getting the following warning. Is it something I need to worry about?