SimonHeybrock closed this 1 year ago.
But then I am unclear about what benefit this has over `group[NXInstrument]`.
You mean the additions in this PR? It avoids a significant amount of boilerplate code, I think.
But why not use `group[NXInstrument][()]` instead of `Instrument.from_nexus(group)`?
The former would necessarily load everything, whereas the approach implemented here allows for customization?
Do you mean because the dataclasses here do not have to include all fields in a NeXus group? If so, wouldn't this be specific to a use case?
Yes, the idea is that each instrument would customize this, e.g., to load (or ignore) specific instrument components.
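For illustration, a minimal sketch of the idea, assuming scippnexus's `NXinstrument` class and that indexing the group with it yields the instrument group, as the thread's usage suggests. The class name `Instrument` follows the discussion, but the field names and loading logic are illustrative assumptions, not the PR's actual code:

```python
# Sketch only: a dataclass declaring just the fields this instrument needs.
# group[NXinstrument][()] would load the whole group; anything not declared
# below is simply never read. Field names are hypothetical.
from dataclasses import dataclass, fields

import scipp as sc
from scippnexus import NXinstrument


@dataclass
class Instrument:
    detector_1: sc.DataArray
    monitor_1: sc.DataArray

    @classmethod
    def from_nexus(cls, group):
        instrument = group[NXinstrument]
        # Load only the declared fields, ignoring the rest of the group.
        return cls(**{f.name: instrument[f.name][()] for f in fields(cls)})
```

Each instrument would then ship its own variant of such a class, which is the customization mentioned above.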
I finally addressed some of the comments here; see the answers and updates. As mentioned face-to-face, I am certain that this will require iterations. For now, we can avoid documenting and advertising it widely, so that we can try it in practice on a small subset of our applications.
I cannot reproduce this error locally. Tried both the latest `scippnexus` and 0.3.2. Can one of you reproduce this?
No, the tests pass on my workstation.
The CI failure comes from a NumPy deprecation warning, triggered by an old h5py (h5py=3.1.0). This was fixed one and a half years ago in h5py=3.2.0. Can we update, @nvaytet, or is something (Mantid?) holding us back?
Yes, I think Mantid is forcing us to pin, although I have not tried in a while; `hdf5=1.10` is still required by the Mantid conda recipe. See the discussion in Slack:
So... should we suppress the warning instead? Or can we avoid the Mantid dependency in the CI?
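Suppressing it could look roughly like this (a sketch, not tested here; the message pattern is a guess based on the NumPy alias deprecations h5py 3.1.0 triggers, and depending on how the CI invokes pytest, an ini-level `filterwarnings` entry may be needed instead of a `conftest.py` filter):

```python
# conftest.py -- sketch only: silence the NumPy deprecation warning raised
# from within h5py 3.1.0 (fixed upstream in h5py 3.2.0), without hiding
# deprecation warnings coming from our own code.
import warnings

warnings.filterwarnings(
    "ignore",
    message=r".*deprecated alias.*",  # assumed pattern of the NumPy warning
    category=DeprecationWarning,
    module=r"h5py.*",
)
```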
Not sure, I've just asked on the Mantid Slack if we could move `hdf5` to a more recent version. We'll see what happens.
I don't think a change there would happen on a timescale that works for us. I think we should begin considering dropping Mantid from the CI, or finding ways to split things.
I think we may eventually decide to move this into `scippnexus`. I did it here initially since I think we will likely need some iterations, and I want to see how it integrates with the implementation of data reduction workflows. Having it in `scippnexus` would tie us more to releases and complicate this.