Currently the reference is reduced to $Q$ before it is applied to normalize the counts from the sample.
This is not correct, because which wavelengths $\lambda$ and which detector pixels contribute to a particular $Q$-bin depend on the instrument settings of the run (the sample inclination $\mu$).
So if the reference measurement is particularly weak at some $Q$ because weak pixels contribute to that $Q$, it does not follow that the sample measurement is weak at the same $Q$: in the sample run, those weak pixels may contribute to a different $Q$.
The solution (if I understand it correctly; perhaps double-check with Jochen) is to reduce the sample and reference measurement(s) to coordinates that do not depend on the instrument settings, such as $\lambda$ and the detector position $y$ (note that Jochen calls this $z$), normalize there, and only then bin in $Q$.
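To make the proposed order of operations concrete, here is a minimal NumPy sketch of the idea: normalize sample by reference bin-by-bin on a common instrument-independent $(\lambda, y)$ grid, then compute $Q$ for each $(\lambda, y)$ bin using *this run's* $\mu$, and only then histogram into $Q$-bins. All names (`mu`, `det_distance`, the simplified geometry $\theta = \mu + \arctan(y/L)$) are illustrative assumptions, not the actual reduction code.

```python
import numpy as np

def q_of(lam, y, mu, det_distance):
    # Momentum transfer Q = 4*pi*sin(theta)/lambda for a pixel at height y,
    # with the sample inclined by mu (simplified, assumed geometry).
    theta = mu + np.arctan2(y, det_distance)
    return 4.0 * np.pi * np.sin(theta) / lam

def normalize_then_bin(sample, reference, lam, y, mu, det_distance, q_edges):
    # sample, reference: 2-D count arrays on a common (lambda, y) grid.
    # Normalize bin-by-bin; bins with zero reference counts become NaN.
    ratio = np.divide(
        sample,
        reference,
        out=np.full_like(sample, np.nan, dtype=float),
        where=reference > 0,
    )
    # Q of every (lambda, y) bin for this run's inclination mu.
    lam2d, y2d = np.meshgrid(lam, y, indexing="ij")
    q = q_of(lam2d, y2d, mu, det_distance)
    # Histogram the already-normalized intensities into Q-bins.
    ok = np.isfinite(ratio)
    summed, _ = np.histogram(q[ok], bins=q_edges, weights=ratio[ok])
    nbins, _ = np.histogram(q[ok], bins=q_edges)
    return np.where(nbins > 0, summed / np.maximum(nbins, 1), np.nan)
```

The key point the sketch illustrates is that `reference` never gets binned in $Q$ at all: the division happens in $(\lambda, y)$, where both runs line up pixel-for-pixel regardless of $\mu$.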