astellhorn opened this issue 2 months ago
As the error indicates, the function expects `Q`, but there is only `Qx` and `Qy`. So the question is: do we need to compute `Q`, or should the function be rewritten so that it uses `Qx` and `Qy`?
Yes, actually `Qx` and `Qy` are even better here. We thought it would in principle not matter for the direct beam, as it should be symmetric, but now I also remember cases where, e.g., the slits are not symmetric, such that the direct beam is also not symmetric. Also, we will handle the other data in `Qx`/`Qy` rather than `Q`, so it is better to be consistent there, I guess.
Can I simply redefine the `q_range` of `compute_direct_beam` from the current 1D to 2D by changing
```python
def compute_direct_beam(
    data: sc.DataArray,
    q_range: sc.Variable,
    background_q_range: sc.Variable,
) -> sc.DataArray:
    """Compute background-subtracted direct beam function."""
    start_db = q_range[0]
    stop_db = q_range[-1]
    start_bg = background_q_range[0]
    stop_bg = background_q_range[-1]
    # The input is binned in time and wavelength, we simply histogram without changes.
    direct_beam = data.bins['Q', start_db:stop_db].hist()
    background = data.bins['Q', start_bg:stop_bg].hist()
    return direct_beam - background
```
to
```python
def compute_direct_beam(
    data: sc.DataArray,
    qx_range: sc.Variable,
    qy_range: sc.Variable,
    background_qx_range: sc.Variable,
    background_qy_range: sc.Variable,
) -> sc.DataArray:
    """Compute background-subtracted direct beam function."""
    start_db_qx = qx_range[0]
    stop_db_qx = qx_range[-1]
    start_db_qy = qy_range[0]
    stop_db_qy = qy_range[-1]
    start_bg_qx = background_qx_range[0]
    stop_bg_qx = background_qx_range[-1]
    start_bg_qy = background_qy_range[0]
    stop_bg_qy = background_qy_range[-1]
    # The input is binned in time and wavelength, we simply histogram without changes.
    direct_beam = data.bins['Qx', 'Qy', start_db_qx:stop_db_qx, start_db_qy:stop_db_qy].hist()
    background = data.bins['Qx', 'Qy', start_bg_qx:stop_bg_qx, start_bg_qy:stop_bg_qy].hist()
    return direct_beam - background
```
?
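For reference, the selection-and-subtraction logic that the 1D `compute_direct_beam` performs can be sketched without scipp, in plain NumPy (synthetic event data; the function and variable names here are illustrative, not part of esspolarization):

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.uniform(0.0, 0.5, size=10_000)  # synthetic event Q values, e.g. in 1/Angstrom
weights = np.ones_like(q)               # unit event weights

def compute_direct_beam_1d(q, weights, q_range, background_q_range, edges):
    """Histogram events inside the signal Q range and subtract the
    histogram of events inside the background Q range (same bin edges)."""
    in_db = (q >= q_range[0]) & (q < q_range[1])
    in_bg = (q >= background_q_range[0]) & (q < background_q_range[1])
    direct_beam, _ = np.histogram(q[in_db], bins=edges, weights=weights[in_db])
    background, _ = np.histogram(q[in_bg], bins=edges, weights=weights[in_bg])
    return direct_beam - background

edges = np.linspace(0.0, 0.5, 51)
result = compute_direct_beam_1d(q, weights, (0.0, 0.1), (0.3, 0.4), edges)
```

With unit weights the result sums to the number of events in the signal range minus the number in the background range; scipp's event slicing plus `hist()` plays the role of the masks plus `np.histogram` here.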
We cannot select 2 dims at the same time, so it should be something like
```python
def compute_direct_beam(
    data: sc.DataArray,
    q_range: sc.Variable,
    background_q_range: sc.Variable,
) -> sc.DataArray:
    """Compute background-subtracted direct beam function."""
    start_db = q_range[0]
    stop_db = q_range[-1]
    start_bg = background_q_range[0]
    stop_bg = background_q_range[-1]
    # The input is binned in time and wavelength, we simply histogram without changes.
    direct_beam = data.bins['Qx', start_db:stop_db]['Qy', start_db:stop_db].hist()
    background = data.bins['Qx', start_bg:stop_bg]['Qy', start_bg:stop_bg].hist()
    return direct_beam - background
```
assuming we want the same range in both X and Y.
Ok! But then it still has to be:

```python
q_range = sc.array(dims=['Q'], values=[0, 0.1], unit='1/Angstrom')
start_db = q_range[0]
stop_db = q_range[-1]
```

?
It looks correct and is working, but when I then try computing the direct beam from the I(Q) of the example data, it seems to miss one dimension: it only yields `Qy`, but not `Qx`:
```python
pl[PolarizationSetting] = '++'
iofq = pl.compute(IofQ[SampleRun])
iofq
```
Result:
```python
pol.DirectBeamNoCell(
    compute_direct_beam(
        data=iofq,
        # data=iqxqy,
        q_range=q_range,
        background_q_range=background_q_range,
    )
)
```
Result:
Sorry about that, I forgot a second `bins`; try this:
```python
def compute_direct_beam(
    data: sc.DataArray,
    q_range: sc.Variable,
    background_q_range: sc.Variable,
) -> sc.DataArray:
    """Compute background-subtracted direct beam function."""
    start_db = q_range[0]
    stop_db = q_range[-1]
    start_bg = background_q_range[0]
    stop_bg = background_q_range[-1]
    # The input is binned in time and wavelength, we simply histogram without changes.
    direct_beam = data.bins['Qx', start_db:stop_db].bins['Qy', start_db:stop_db].hist()
    background = data.bins['Qx', start_bg:stop_bg].bins['Qy', start_bg:stop_bg].hist()
    return direct_beam - background
```
You will see that the output has no binning now, since the event filtering selected sub-bins. If you need new `Qx` and `Qy` bins, you have to `bin` again.
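Conceptually, the two chained `bins` slices intersect two per-event range masks, and the `hist()` afterwards replaces the binning that the filtering removed. A scipp-free NumPy sketch of that logic (synthetic `Qx`/`Qy` events; purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
qx = rng.uniform(-0.2, 0.2, size=5_000)  # synthetic event Qx values
qy = rng.uniform(-0.2, 0.2, size=5_000)  # synthetic event Qy values

# Step 1: restrict events by Qx, then by Qy (chained selection).
keep_x = (qx >= -0.05) & (qx < 0.05)
sel_qx, sel_qy = qx[keep_x], qy[keep_x]
keep_y = (sel_qy >= -0.05) & (sel_qy < 0.05)
sel_qx, sel_qy = sel_qx[keep_y], sel_qy[keep_y]

# Chaining the two selections is equivalent to one combined mask.
both = keep_x & (qy >= -0.05) & (qy < 0.05)
assert len(sel_qx) == both.sum()

# Step 2: the selected events carry no binning anymore; rebin by histogramming.
hist2d, xedges, yedges = np.histogram2d(sel_qx, sel_qy, bins=20)
```

The order of the two slices does not matter, since mask intersection commutes; what matters is the final histogramming step, which restores a regular `Qx`/`Qy` grid.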
(1) Can the ideas from #19 be included in esspolarization, perhaps by replacing the `RunSectionLog` with the idea in #19? (2) For the ISIS data, Johannes has added the read-in of the different spin channels, so I can probably use these as a starting point to test the workflow?
I am collecting ideas and testing at the same time; I hope you don't mind me posting my thoughts here, and please comment if you have any suggestions.
@SimonHeybrock in #19, what do you define as "background" and what as "SampleRun"? Is, for you, the SampleRun everything with the sample in, and the Background Run everything else, e.g. the direct-beam measurements through the He cells [depolarized, polarized]? Or is Background here the background of the detector during each run, i.e., in a predefined Q-range far away from the direct beam and any peaks in I(Q)?
@astellhorn We have actually been working on a solution for the problem discussed in #19, but using a different, more general approach. It is nearly ready, but still needs some work. Is this something you need urgently right now?
> @SimonHeybrock in #19, what do you define as "background" and what as "SampleRun"? Is for you the SampleRun everything with sample in and Background Run everything else, e.g. the directbeam measurements through the He cells [depolarized, polarized]? Or is Background for you here the Background of the detector during each run, i.e., in a predefined Q-range far away from the direct beam and any peaks in I(Q)?

I used terminology from regular unpolarized SANS measurements. Background runs are those that are subtracted from the final I(Q). I am not sure background subtraction is performed in polarized SANS? And if it is, I don't know which components are in the beam for background runs.
> @astellhorn We have actually been working on a solution for the problem discussed in #19, but using a different, more general approach. It is nearly ready, but still needs some work. Is this something you need urgently right now?
I was thinking about how to go on testing the workflow functions, for example the opacity. In the workflow we use the data from the direct beam with and without cell, for which the workflow in principle uses the RunSectionLog to determine which run is which, so it would be nice to test it together with the correctly sorted data.
I can also go ahead first with an easier example, without using the RunSectionLog, and just specify manually which data to use for the calculation of opacity from the beam data. But then I only test whether the fitting algorithm itself works, not whether it works together with the data structuring and the readout of which spin state and which positions the analyzer and polarizer are in, etc.
> @SimonHeybrock in #19, what do you define as "background" and what as "SampleRun"? Is for you the SampleRun everything with sample in and Background Run everything else, e.g. the directbeam measurements through the He cells [depolarized, polarized]? Or is Background for you here the Background of the detector during each run, i.e., in a predefined Q-range far away from the direct beam and any peaks in I(Q)?
>
> I used terminology from regular unpolarized SANS measurements. Background runs are those that are subtracted from the final I(Q). I am not sure background-subtraction is performed in polarized SANS? And if it is, I don't know which components are in the beam for background runs.
Ok, then I know! Thank you.
> @astellhorn We have actually been working on a solution for the problem discussed in #19, but using a different, more general approach. It is nearly ready, but still needs some work. Is this something you need urgently right now?
>
> I was thinking of how to go on testing the workflow functions, for example the opacity. In the workflow we use the data from direct beam with and without cell, for which in principle the workflow uses the RunSectionLog to determine which run is what, so it would be nice to test it together with the correctly sorted data. I can also go ahead first to use an easier example without having used the RunSectionLog before and just tell manually which data to use for the calculation of opacity from beamdata, but then I test only if the fitting algorithm itself works, not if it works together with the datastructuring and readout in which spinstate/in which positions etc. are analyzer + polarizer.
The entire bit with `RunSectionLog` in the workflow draft was written when I/we thought that everything would be in a single big event-data file. As the data you have now is not in this format (and maybe it is not even clear if it will be at ESS?), we should probably consider ditching that approach, for now at least?
> @astellhorn We have actually been working on a solution for the problem discussed in #19, but using a different, more general approach. It is nearly ready, but still needs some work. Is this something you need urgently right now?
>
> I was thinking of how to go on testing the workflow functions, for example the opacity. In the workflow we use the data from direct beam with and without cell, for which in principle the workflow uses the RunSectionLog to determine which run is what, so it would be nice to test it together with the correctly sorted data. I can also go ahead first to use an easier example without having used the RunSectionLog before and just tell manually which data to use for the calculation of opacity from beamdata, but then I test only if the fitting algorithm itself works, not if it works together with the datastructuring and readout in which spinstate/in which positions etc. are analyzer + polarizer.
>
> The entire bit with `RunSectionLog` in the workflow draft was written when I/we thought that everything will be in a single big event-data file. As the data you have now is not in this format (and maybe it is not even clear if it will be at ESS?), we should probably consider ditching that approach, for now at least?
All people here assume (at the moment) that it will be done via single nxs files for each measurement. That's why I thought it would be nice to integrate what you had proposed in #19.
In preparation for the next steps, some more detailed information is in the new issue #43.
Main goal: I am going step by step through the functions defined in esspolarization's base.py and trying to use them on the polarized test data inside the esssans workflow. The goal is to find out where to insert the polarized workflow into the unpolarized workflow, how to treat Direct Beam (through He cells) and Sample data, and whether the esspolarization definitions have to be changed.
What works
What does not work
Questions
module 'ess.polarization' has no attribute 'ReducedDirectBeamDataNoCell'
Code for the failing DirectBeamNoCell computation
Error Message