Closed · rajaiitp closed this 2 days ago
Hey @rajaiitp, thanks for this. Could you please clarify the details here?
From what I see, the add_noise method in readout.py applies noise at the level of bitstrings rather than probability vectors. This means that sampling happens before readout errors are applied.
Adjoint differentiation isn't compatible with sampling, as I understand it, so PSR is the go-to method if one wants readout errors. It would be much more convenient if we could bypass sampling and apply readout errors directly to the probability vector instead.
I'd like to think that the best way to do this is to generate the transformed probability vectors, affected by the readout matrix, directly at the level of the backend (instead of the exact noise-free state vectors), so that diagonal observables can be measured from them.
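To make that concrete, here is a minimal sketch assuming uncorrelated, identical per-qubit readout errors. The helper names (`single_qubit_confusion`, `readout_confusion`) and the error rates are illustrative, not PyQTorch's actual API:

```python
import torch

def single_qubit_confusion(p01: float, p10: float) -> torch.Tensor:
    # Column-stochastic confusion matrix: entry [i, j] = P(read i | true j).
    # p01 = P(read 0 | true 1), p10 = P(read 1 | true 0) -- illustrative rates.
    return torch.tensor([[1.0 - p10, p01],
                         [p10, 1.0 - p01]])

def readout_confusion(n_qubits: int, p01: float, p10: float) -> torch.Tensor:
    # Uncorrelated identical errors: full matrix is an n-fold Kronecker product.
    mat = single_qubit_confusion(p01, p10)
    full = mat
    for _ in range(n_qubits - 1):
        full = torch.kron(full, mat)
    return full

n = 2
# Exact probability vector of the noiseless state, e.g. (|00> + |11>)/sqrt(2).
probs = torch.tensor([0.5, 0.0, 0.0, 0.5])

# Apply readout errors directly on the probability vector -- no sampling.
noisy_probs = readout_confusion(n, p01=0.02, p10=0.05) @ probs

# Expectation of a diagonal observable (here Z x Z) from the noisy probabilities.
zz_diag = torch.tensor([1.0, -1.0, -1.0, 1.0])
print(noisy_probs, torch.dot(zz_diag, noisy_probs))
```

Since the readout map is linear in the probabilities, gradients flow straight through it, so this stays compatible with AD (no sampling step in the graph).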
@gvelikova Could you please have a go at figuring out what this entails exactly? That'd be helpful. Thanks. This feels very much like a PyQTorch thing.
@rajaiitp @gvelikova It would be great to start on this.
Missed it; I will start working on this soon.
Closing as not relevant anymore.
Requesting that readout errors be applied without sampling
tensordot can be used to apply readout errors if PyTorch computes the probability vector, rather than using the state vector directly to compute expectation values (not sure whether this makes it slower, but it is only usable for diagonal observables); see the sketch below.
This would help with training models faster using AD while keeping the effects of readout errors, something the use-case people will require sooner or later for robustness studies.
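A rough sketch of the tensordot idea (hypothetical helper name, illustrative error rates; not an existing Qadence/PyQTorch API), contracting a 2x2 confusion matrix against each qubit axis of the probability tensor:

```python
import torch

def apply_readout_tensordot(probs: torch.Tensor, confusion: torch.Tensor) -> torch.Tensor:
    # probs: probability tensor reshaped to (2,) * n_qubits.
    # confusion: 2x2 single-qubit confusion matrix, applied to every qubit in
    # turn, so the full 2^n x 2^n Kronecker matrix is never materialised.
    n = probs.dim()
    for q in range(n):
        # Contract confusion's second index with the q-th axis of probs...
        probs = torch.tensordot(confusion, probs, dims=([1], [q]))
        # ...tensordot puts the new axis first, so move it back to position q.
        probs = probs.movedim(0, q)
    return probs

n = 2
confusion = torch.tensor([[0.95, 0.02],
                          [0.05, 0.98]])  # illustrative error rates

# Bell-state probabilities, reshaped to one axis per qubit.
probs = torch.tensor([0.5, 0.0, 0.0, 0.5]).reshape((2,) * n)
noisy = apply_readout_tensordot(probs, confusion).reshape(-1)

# Diagonal observable (Z x Z) expectation from the noisy probabilities.
zz_diag = torch.tensor([1.0, -1.0, -1.0, 1.0])
print(torch.dot(zz_diag, noisy))
```

This contracts one 2x2 matrix per qubit instead of building the full 2^n x 2^n confusion matrix, which is the main reason to prefer tensordot here, and the whole map stays differentiable end to end.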