Closed klauer closed 3 years ago
Given the downsides here, I think the current algorithm remains more general and superior. While the refactor is not worth it, this remains an interesting implementation.
This issue would have been better suited for a GitHub "discussion", probably. Closing for now.
Current Behavior
The current calculation method exhaustively enumerates all possible filter states and finds the best option via matrix multiplication and argmin. With some caching tricks and by relying on NumPy to handle the multiplication, this is reasonably fast (~50 ms/run) - fast enough not to be a bottleneck for how we use it.
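As a rough sketch of the exhaustive approach described above - with made-up per-filter transmissions, not the project's real data or its actual caching tricks:

```python
import numpy as np

# Hypothetical per-filter transmissions at the current photon energy.
filter_transmissions = np.array([0.9, 0.8, 0.6, 0.3])
t_desired = 0.25

n = len(filter_transmissions)
# Enumerate all 2**n insertion states as rows of a (2**n, n) boolean matrix;
# bit k of the row index says whether filter k is inserted.
states = ((np.arange(2**n)[:, None] >> np.arange(n)) & 1).astype(bool)
# Each state's transmission is the product of its inserted filters'
# transmissions (non-inserted filters contribute a factor of 1).
transmissions = np.where(states, filter_transmissions, 1.0).prod(axis=1)
# Pick the state closest to the desired transmission.
best = int(np.argmin(np.abs(transmissions - t_desired)))
best_state = states[best]
```

The cost here is the 2**n enumeration: fine for a handful of filters, but it grows exponentially with the filter count.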
Possible refactor
lcls_beamline_toolbox, using the CXRO dataset, provides: https://github.com/mseaberg/lcls_beamline_toolbox/blob/e8104b4763d68be592395cbdae875bdfacd89a9d/lcls_beamline_toolbox/xrayinteraction/interaction.py#L196-L205 allowing the required material thickness to be calculated from the desired transmission and the (current) photon energy. From there, the filters to insert can be determined with a binary-representation trick: the filters are arranged in order, each 2x the thickness of the previous. The lowest "bit" (least thick filter) determines the scaling factor - that is, diamond is 10 and silicon is 20. This is easily a couple of orders of magnitude faster than the current method.
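A minimal sketch of the binary-representation trick, assuming Beer-Lambert attenuation and made-up numbers for the thinnest filter and attenuation length (this is not the real CXRO data or the toolbox's API):

```python
import math

# Hypothetical example values, for illustration only.
t_min = 10.0        # thinnest filter thickness, um (the lowest "bit")
att_length = 30.0   # 1/e attenuation length at this photon energy, um
n_filters = 4       # thicknesses t_min * 2**k for k = 0..3

t_desired_transmission = 0.05
# Beer-Lambert: T = exp(-t / att_length)  =>  t = -att_length * ln(T)
t_required = -att_length * math.log(t_desired_transmission)

# Round the required thickness to the nearest multiple of the thinnest
# filter (clamped to what the stack can provide); the binary representation
# of that multiple directly selects which filters to insert.
steps = min(round(t_required / t_min), 2**n_filters - 1)
inserted = [bool((steps >> k) & 1) for k in range(n_filters)]
```

Because the doubling thicknesses make every achievable total a multiple of `t_min`, no search is needed at all - one log, one division, and a few bit tests replace the 2**n enumeration.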
Notes
Context
Ref: https://github.com/mseaberg/lcls_beamline_toolbox Related: #47