reflectometry / refl1d

1-D reflectometry fitting
https://refl1d.readthedocs.io/
BSD 3-Clause "New" or "Revised" License

Fittable resolution #121

Open pkienzle opened 3 years ago

pkienzle commented 3 years ago

[…] The one thing that this might throw up for constant dq/q measurement resolutions is whether we can (and indeed whether we should) fit the dq/q resolution directly (in an ideal world, for well-defined sample shapes, the resolution should be known; however, odd sample shapes that are over-illuminated make it difficult to determine precisely). I believe this would manifest differently from how sample broadening is applied to the resolution.

Originally posted by @acaruana2009 in https://github.com/reflectometry/refl1d/issues/111#issuecomment-845420089

pkienzle commented 3 years ago

I don't think you want fittable Δq/q exactly but rather fittable Δθ/sin θ.

Consider the case of a small irregular sample which is fully illuminated throughout the scan. Here the sample acts as a complex aperture, with each strip of sample acting as a slit based on the width of that strip. The overall angular divergence is then found by summing the angular distributions from the individual strips, scaled by the strip width, and for convenience fitting that to a gaussian. The divergence will be roughly of the form (s + w)/(2d), where s is the other defining slit and d is the distance of that slit from the sample. Assuming s is fixed, divergence increases roughly linearly with w, and w increases linearly with sin θ through the beam footprint, therefore Δθ = A + B sin θ for some constants A and B.

This is indeed different from sample broadening, which just modifies the constant A. So for the itty-bitty sample case we could add an additional fittable B, feeding this into (ΔQ/Q)² = (Δλ/λ)² + (Δθ/tan θ)² with fixed Δλ/λ and cover both the unknown aperture and unknown sample warp.
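The linear divergence model above can be sketched in a few lines. This is a minimal illustration, not refl1d code: `A`, `B`, and the values passed in are hypothetical fitting constants, with `A` covering sample broadening and `B` the footprint-dependent term.

```python
import numpy as np

def dq_over_q(theta, A, B, dl_over_l):
    """Relative Q resolution from the linear divergence model
    dtheta = A + B*sin(theta), folded with a fixed dlambda/lambda via
    (dQ/Q)**2 = (dlambda/lambda)**2 + (dtheta/tan(theta))**2.
    theta is in radians; A and B are the hypothetical fitted constants."""
    dtheta = A + B * np.sin(theta)
    return np.sqrt(dl_over_l**2 + (dtheta / np.tan(theta))**2)

# Illustrative numbers only: resolution at a few angles.
theta = np.radians(np.array([0.5, 1.0, 2.0]))
print(dq_over_q(theta, A=1e-4, B=5e-4, dl_over_l=0.01))
```

With A = B = 0 this reduces to the pure wavelength term, which is a quick sanity check on the formula.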

My guess is that the effect will be greatest at low Q, so yet more fudge to dump on the critical edge.

acaruana2009 commented 3 years ago

Hi Paul, Thanks for raising this as a separate ticket.

To confirm, the way we run POLREF, we keep a constant footprint by scaling the slits with the θ angle. This means that our Δθ/θ is constant. We assume (probably not fully correctly) that for large enough dQ/Q values, say larger than 1%, the Δλ/λ term is negligible in comparison to Δθ/θ.

So in principle the sample should be acting as a slit as you describe, but with its effective slit opening also scaling with θ. So we calculate Δθ = arctan((s + sample_effective_gap)/(2d)) (neglecting sample broadening, which is your A term), where sample_effective_gap = sample_length * sin(θ).
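As a rough illustration of that formula (the slit opening `s`, slit-to-sample distance `d`, and `sample_length` below are made-up values, not POLREF settings):

```python
import numpy as np

def polref_dtheta(theta, s, d, sample_length):
    """Divergence from the defining slit plus the sample acting as a slit
    of opening sample_length*sin(theta) (the footprint projection), per
    dtheta = arctan((s + sample_effective_gap)/(2d)). theta in radians;
    sample broadening (the A term) is neglected. Names are illustrative."""
    sample_effective_gap = sample_length * np.sin(theta)
    return np.arctan((s + sample_effective_gap) / (2 * d))

# Illustrative numbers: 1 mm slit, 2 m from a 50 mm sample, at 1 degree.
print(np.degrees(polref_dtheta(np.radians(1.0), s=1.0, d=2000.0, sample_length=50.0)))
```

Note that the effective gap grows with sin θ, so even with fixed slits the sample term alone reproduces the B·sin θ behaviour discussed above.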

alexander-grutter commented 3 years ago

I think I see what you are getting at. You are right that this is a concern and I agree that it will have the largest effect at the critical edge. However, from an empirical standpoint at continuous wave sources I have found that sample shape does not meaningfully influence my fit quality near the critical edge (or overall). I seem to have just as many systematics in square vs. triangular samples, which would suggest that this is not a dominating factor currently.

My guess is that sample angle errors are our dominant factor.

pkienzle commented 3 years ago

@acaruana2009, yes I dropped the arctan to keep things simple. It's even more complicated than that. Unequal slits produce a symmetric trapezoidal distribution, with variance (w² + t²)/6. Also, our detector mask is tiny, so depending on the settings it might be controlling the divergence. Thus our reduction program uses the minimum divergence from all pairs of slits to compute Δθ, including the sample_effective_gap. The code is as follows:

import numpy as np  # needed for arctan and sqrt

def _divergence(i, j):
    # Divergence from slit pair (i, j): the beam profile is a symmetric
    # trapezoid with half-widths w and t, whose variance is (w² + t²)/6.
    s1, s2, d = slits[i], slits[j], distance[i] - distance[j]
    w, t = np.arctan(abs((s1 + s2)/2/d)), np.arctan(abs((s1 - s2)/2/d))
    return np.sqrt((w**2 + t**2)/6)

n = len(slits)
dtheta = min(_divergence(i, j) for i in range(n) for j in range(i+1, n))
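For illustration, here is a self-contained run of that minimum-over-pairs calculation, with the sample's effective gap appended as an extra "slit" at the sample position. All openings and distances below are made-up values, not instrument settings:

```python
import numpy as np

theta = np.radians(1.0)
sample_length = 50.0  # mm, hypothetical
# Slit openings (mm): S1, S2, and the sample's effective gap.
slits = [1.0, 0.5, sample_length * np.sin(theta)]
distance = [-2000.0, -200.0, 0.0]  # mm relative to the sample position

def _divergence(i, j):
    # Trapezoidal beam profile from slit pair (i, j): half-widths w and t,
    # with variance (w**2 + t**2)/6 as noted above.
    s1, s2, d = slits[i], slits[j], distance[i] - distance[j]
    w = np.arctan(abs((s1 + s2) / 2 / d))
    t = np.arctan(abs((s1 - s2) / 2 / d))
    return np.sqrt((w**2 + t**2) / 6)

n = len(slits)
dtheta = min(_divergence(i, j) for i in range(n) for j in range(i + 1, n))
print(f"dtheta = {np.degrees(dtheta):.4f} deg")
```

With these numbers the S1/S2 pair defines the divergence; shrinking the sample until its effective gap is the smallest aperture makes the sample pair take over, which is the over-illuminated case under discussion.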

Looking at Monte Carlo simulations for candor, the result is in good agreement for most slit settings, though it can be out by as much as 30% in some conditions. You can play by running python -m candor.simulate 1 5, which sets up an initial configuration at 1° with a 5:1 S1:S2 ratio. Using the sliders you can twiddle the various slit openings and watch the θ distribution change. You can also see what happens when the sample or slits are offset from the beam center.

This doesn't rule out tweaking Δθ by a linear term, but given that Alex isn't seeing any increased difficulty fitting non-square samples it sounds like this will be a minor effect so best not to include it. If you agree, then close the ticket.

acaruana2009 commented 3 years ago

For us, angular errors as a result of alignment (or beamline tracking) are not so much of an issue; we correct for them using our position sensitive detector. We do, however, have additional smearing of the resolution due to gravity, being a horizontal sample reflectometer, which we are not currently capturing and which is another complicated problem we need to correct for.

Being a short pulse spallation source TOF reflectometer, our resolution considerations are different from those of a continuous wave instrument such as candor. I can't compare it to our case as I don't know how you operate it; @alexander-grutter, do you increase your slits with θ?

In general, I used the case of over illuminating small samples of strange shapes to highlight where we cannot accurately determine our dq/q resolution - i.e. in this case our dθ/θ term. The question here is how much of an error in dq/q matters to fitting the reflectivity? I strongly suspect that is dependent on the samples you are trying to measure.

Since our dq/q resolution is in principle constant, yes, I would expect significant effects at the critical edge, but also at sharp features across the reflectivity curve, such as fringes for thick samples and Bragg peaks. In fact, when I have moved from λ summing to coherent summing across our detector, I have seen remarkably significant effects around Bragg reflections, resolving fringes that λ summing smears out even for very flat samples.

So I think what I am asking here is: should we (to cater for TOF short pulse spallation source reflectometers) be able to fit the constant resolution term directly? The sample broadening term, as I understand it, would not do the same thing. I think @christykinane and I need to discuss the other examples and use cases where this could be required, and how much not accurately determining the resolution affects our ability to fit the data.

@pkienzle thank you for sharing the candor simulations and equations - it looks interesting!