After a discussion with @BrunoLiegiBastonLiegi, we suggest dividing the `one_prediction` function into the following two (a rough sketch is given below the list):
- `prediction`, without mitigation inside;
- `mitigated_prediction`, with mitigation inside.
This can be useful for adding a new level to our analysis: the parameter shift rule also makes heavy use of predictions (it basically evaluates the observable twice, using the two shifted parameters $\mu^{\pm}$ instead of the original $\mu$). With the proposed splitting we can:
- perform the fit without mitigation;
- perform the fit with mitigation only at the end (executing `predict_sample` with `mitigated_prediction`);
- perform the fit with `prediction` while evaluating gradients and `mitigated_prediction` while calculating predictions (sketched after this list);
- perform the fit using `mitigated_prediction` each time a prediction is called.
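To illustrate the interplay with the parameter shift rule, here is a hedged sketch that reuses the toy `prediction` and `mitigated_prediction` from the snippet above; the `parameter_shift_gradient` helper and the $\pi/2$ shift are just the textbook rule for standard rotation gates, not the gradient code actually in the repository:

```python
import numpy as np


def parameter_shift_gradient(predict, params, shift=np.pi / 2, nshots=1000):
    """Gradient of the observable via the parameter shift rule: every
    component costs two extra evaluations, at the shifted parameters
    mu^+ and mu^-."""
    grad = np.zeros_like(params)
    for k in range(len(params)):
        shifted = params.copy()
        shifted[k] += shift
        plus = predict(shifted, nshots=nshots)     # observable at mu^+
        shifted[k] -= 2 * shift
        minus = predict(shifted, nshots=nshots)    # observable at mu^-
        grad[k] = 0.5 * (plus - minus)
    return grad


# Third option above: unmitigated evaluations inside the gradient loop,
# mitigated evaluations only when the actual predictions are needed.
params = np.array([0.1, 0.4, 0.7])
grad = parameter_shift_gradient(prediction, params)        # gradients via prediction
final = mitigated_prediction(params, nshots=10_000)        # output via mitigated_prediction
```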