I think what you are looking for is MultiSegmentations:
https://github.com/AIDASoft/DD4hep/blob/master/examples/ClientTests/compact/MultiSegmentations.xml
Aha! Yes, I think this is what I need. Thanks!
As for my desire for 1D Segmentations, perhaps this is not shared by others. If that is the case, please consider this issue resolved.
Can you elaborate on your idea and the need for 1D segmentations? I find it a bit problematic: you want a relation cellID <-> localPosition, and if you don't provide dimensions in the other direction, I don't see how you envision obtaining this relation.
Correct me if I'm wrong, but I think the situation with CartesianGrid** for a strip detector would look like the attached figure, where the blue trapezoid is the detector volume, the black rectangles are the segmentation boundaries, and the circled Xs are the cell positions. In this scenario the segmentation boundaries in the horizontal direction are not useful, since the boundaries of the volume need to be probed to actually determine the position uncertainty of a hit. However, a generalized reconstruction code is not necessarily aware of this, so to be perfectly general, the code must always test whether the sensitive volume boundaries are the limiting dimension.
My thought is that the natural way to save the generalized code from having to check the distance to the boundary in all cases is to have 1D segmentations that only report a single cell dimension, for use when the volume boundary is intended to limit the cell boundary. This would allow generalized reconstruction code to test for the segmentation type in order to determine what it needs to do to calculate the hit position uncertainty.

These 1D segmentations would be interpreted as in the second attached figure: here the reconstruction code knows that the sensitive volume boundaries define the horizontal cell dimensions, and so it must calculate the distance to the boundary. Between the 2D and 1D scenarios, only the type and the reported cell dimensions change; there is still a cellID <-> position relationship.
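For illustration only, here is a minimal sketch of what such a 1D segmentation could look like. This is a hypothetical, self-contained class, not an existing DD4hep type; a real implementation would inherit from dd4hep::DDSegmentation::Segmentation and fill a single field of the cellID.

```cpp
// Hypothetical 1D strip segmentation: bins only the local x coordinate and
// reports a single cell dimension (the strip pitch). Illustrative only; not
// an actual DD4hep class.
#include <cmath>
#include <cstdint>
#include <vector>

struct Strip1D {
  double pitch;    // cell size along the measured direction
  double offsetX;  // offset of the first strip centre

  // local position -> strip index (the only field this segmentation fills)
  std::int64_t bin(double localX) const {
    return static_cast<std::int64_t>(std::floor((localX - offsetX) / pitch + 0.5));
  }

  // strip index -> local position of the strip centre; the other coordinates
  // are undefined, because the volume boundary, not the segmentation,
  // limits those directions
  double position(std::int64_t stripIndex) const {
    return stripIndex * pitch + offsetX;
  }

  // Only one dimension is reported; reconstruction code can use the size of
  // this vector to decide whether it must probe the volume for the rest.
  std::vector<double> cellDimensions(std::int64_t /*stripIndex*/) const {
    return {pitch};
  }
};
```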
I see. We have also discussed on several other occasions that segmentations should be aware of the volume they are attached to, but the initial design of DDSegmentation foresaw a strict separation of the segmentation functionality from the rest of DD4hep. Lately, however, we agreed that segmentations will be absorbed into the core and eventually made aware of the volume they sit on. But there is at present no progress on this topic, and segmentations remain separated from volumes; hence your proposal with 1D segmentations is sadly not possible.
We would, however, be very happy about any volunteer contributions in this direction.
Thanks for your response, @petricm
I am thinking that the segmentation doesn't necessarily need to be aware of the volume. In the case of 2D segmentations which do not specify a third cell dimension, my code then uses the volume information to determine the limitation in that direction. The same could be done for 1D segmentations. The only thing gained by having 1D segmentation types is a differentiation from 2D, which I do not see as any different from the existing differentiation between 3D and 2D.
Best,
Can you give an example of this, please?

"In the case of 2D segmentations which do not specify a third cell dimension, my code then uses the volume information to determine the limitation in that direction."
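To make this concrete, here is an editorial sketch (not from the thread) of the kind of fallback being described: the reconstruction pads the dimensions the segmentation does not constrain with the extent of the sensitive solid. It assumes a box-shaped solid, a Cartesian local frame whose axes match the segmentation's dimension ordering, and a cellDimensions() accessor on the Segmentation handle; details may differ between DD4hep versions.

```cpp
#include <cstddef>
#include <vector>
#include "DD4hep/Segmentations.h"
#include "DD4hep/Shapes.h"
#include "DD4hep/Volumes.h"

// Return effective cell dimensions for a hit: whatever the segmentation
// reports, padded in the unsegmented directions with the full extent of the
// (assumed box-shaped) sensitive volume.
std::vector<double> effectiveCellDimensions(const dd4hep::Segmentation& seg,
                                            const dd4hep::Volume& vol,
                                            dd4hep::CellID cellID) {
  std::vector<double> dims = seg.cellDimensions(cellID);
  dd4hep::Box box = vol.solid();  // x(), y(), z() are half-lengths
  const double full[3] = {2. * box.x(), 2. * box.y(), 2. * box.z()};
  for (std::size_t i = dims.size(); i < 3; ++i)
    dims.push_back(full[i]);      // the volume limits this direction
  return dims;
}
```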
To add my two cents to this discussion: adding 1D segmentation classes would in principle be straightforward and a logical extension of the 2D case we currently have. As @decibelcooper has pointed out, for this to be applicable to strip detectors with a stereo angle (more generally, with a strip orientation not parallel to one of the main coordinate axes) it requires an additional coordinate transform (a rotation rather than a translation/offset).

I am not sure, however, that this will work in a reasonable way with our current SensitiveTrackerSD actions. The reason is that the segmentation is only virtual in the sense that it is used to compute the cellID that is associated with the hit, which in turn is made from several steps. This works nicely for calorimeters, where the cell size is typically large compared to the step sizes and edge effects (i.e. allocating energy to one cell even though some of the step energy was really deposited in a neighboring cell) are small. This is typically not the case for Si trackers.
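To make the edge effect above concrete, here is an editorial sketch of the kind of bookkeeping a sensitive action does: one cellID per step, computed from a single representative point, with the whole step energy booked to that cell. Function and variable names are illustrative; the real logic lives in DDG4's sensitive-action specializations.

```cpp
#include <map>
#include "DD4hep/Objects.h"         // dd4hep::Position
#include "DD4hep/Segmentations.h"

// One cellID per step: fine when cells are much larger than steps
// (calorimeters), lossy near the boundaries of fine Si strips.
void accumulateStep(const dd4hep::Segmentation& seg,
                    dd4hep::VolumeID volID,
                    const dd4hep::Position& localPre,   // step start, local frame
                    const dd4hep::Position& localPost,  // step end, local frame
                    const dd4hep::Position& globalMid,  // step midpoint, global frame
                    double eDep,
                    std::map<dd4hep::CellID, double>& cellEnergy) {
  // Use the step midpoint as the single representative point.
  dd4hep::Position localMid((localPre.x() + localPost.x()) / 2.,
                            (localPre.y() + localPost.y()) / 2.,
                            (localPre.z() + localPost.z()) / 2.);
  dd4hep::CellID id = seg.cellID(localMid, globalMid, volID);
  cellEnergy[id] += eDep;  // all of the step energy goes to this one cell
}
```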
For the linear collider simulation and reconstruction we are following a different approach for trackers: the cellID consists of the volumeID, and we store the true position of the tracker hit inside the volume; no segmentations are used. At the reconstruction stage (which for us includes digitization) the actual geometry is taken from the dd4hep::rec::Surface class assigned to the sensitive volume, which defines the measurement directions u, v (only u for 1D) inside the wafer. With this you can have an arbitrary orientation of your measurement directions inside the wafer. For example, see here for the ILD outer Si tracker with double strip layers with a stereo angle:
https://github.com/iLCSoft/lcgeo/blob/master/detector/tracker/SET_Simple_Planar_geo.cpp#L303-L331
Advantages of this approach are that you can use the dd4hep::rec::Surface classes for your track reconstruction, including …
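For context, here is a rough editorial sketch (not from the thread) of how reconstruction code can pick up those measurement directions from the surfaces attached to a subdetector. The detector name "SET" is borrowed from the linked lcgeo example, and exact accessor names may vary between DD4hep/DDRec versions.

```cpp
#include <iostream>
#include "DD4hep/Detector.h"
#include "DDRec/ISurface.h"
#include "DDRec/SurfaceManager.h"

// Look up the surface for a given (volume) cellID and print its measurement
// directions u and v; for a 1D measurement only u is meaningful.
void printMeasurementDirections(dd4hep::Detector& description, dd4hep::CellID cellID) {
  const auto* surfMan = description.extension<dd4hep::rec::SurfaceManager>();
  const dd4hep::rec::SurfaceMap* surfMap = surfMan->map("SET");  // assumed detector name
  if (!surfMap) return;
  auto it = surfMap->find(cellID);  // keyed by the sensitive volume's cellID
  if (it == surfMap->end()) return;
  const dd4hep::rec::ISurface* surf = it->second;
  dd4hep::rec::Vector3D u = surf->u();
  dd4hep::rec::Vector3D v = surf->v();
  std::cout << "1D measurement: " << std::boolalpha
            << surf->type().isMeasurement1D() << "\n"
            << "u = (" << u.x() << ", " << u.y() << ", " << u.z() << ")\n"
            << "v = (" << v.x() << ", " << v.y() << ", " << v.z() << ")\n";
}
```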
Hi @gaede
Thank you for sharing this approach! I must admit that I've yet to be sold on Surfaces, since aside from the limitations of Segmentations that we are discussing, it appears to me that Surfaces are superfluous. Perhaps this is a good opportunity for me to understand the need for Surfaces. I am actually using Segmentations in the way that you are using Surfaces.
I have custom Geant4HitData, SD action, and Geant4OutputAction. I save every calorimeter and tracker step, ignoring segmentation (here is a Protobuf implementation of what I save: https://github.com/decibelcooper/proio/blob/576f49e2df6b44d7175109aaa15e1b5fb1a9e2a6/model/eic.proto#L29-L42). Then, I digitize the calorimeters along with trackers outside of the simulation using the Segmentations.
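For readers following along, here is an editorial sketch of what the offline digitization step described above can look like in DD4hep terms: the compact description is loaded outside of Geant4 and the readout's Segmentation turns a saved true position into a cellID. The readout name and compact file below are placeholders.

```cpp
#include "DD4hep/Detector.h"
#include "DD4hep/Readout.h"
#include "DD4hep/Segmentations.h"

// Offline digitization sketch: look up a readout's segmentation from the
// compact description and assign a cellID to a stored (true) step position.
dd4hep::CellID digitizePosition(dd4hep::Detector& description,
                                const dd4hep::Position& localPos,
                                const dd4hep::Position& globalPos,
                                dd4hep::VolumeID volID) {
  // Placeholder readout name; in practice it comes from the compact file.
  dd4hep::Readout readout = description.readout("SiTrackerHits");
  dd4hep::Segmentation seg = readout.segmentation();
  return seg.cellID(localPos, globalPos, volID);
}

// Typical setup (placeholder file name):
//   dd4hep::Detector& description = dd4hep::Detector::getInstance();
//   description.fromCompact("compact/mydetector.xml");
```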
Aside from the advantages you listed, am I missing any significant features by using Segmentations instead of Surfaces?
I cannot find built-in functionality to specify a strip detector with multiple sensitive layers whose strips run in different directions. It seems like there are two reasonable ways to accomplish this: either 1) allow the specification of a readout coordinate transformation within the Volume, or 2) write a custom detector constructor that rotates the sensitive layers relative to each other while keeping the required boundaries after rotation. Option 1 would allow much more flexibility.
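As a rough illustration of option 2 (an editorial sketch with placeholder names, sizes and stereo angle, not a working detector), a custom constructor could place the same sensor volume twice with a relative rotation about its normal:

```cpp
#include "DD4hep/DD4hepUnits.h"
#include "DD4hep/DetFactoryHelper.h"

static dd4hep::Ref_t create_detector(dd4hep::Detector& description, xml_h e,
                                     dd4hep::SensitiveDetector sens) {
  xml_det_t x_det(e);
  dd4hep::DetElement det(x_det.nameStr(), x_det.id());

  // One strip sensor volume, reused for both layers (placeholder dimensions).
  dd4hep::Box sensorBox(5.0 * dd4hep::cm, 5.0 * dd4hep::cm, 0.15 * dd4hep::mm);
  dd4hep::Volume sensorVol("sensor", sensorBox, description.material("Silicon"));
  sensorVol.setSensitiveDetector(sens);

  dd4hep::Assembly moduleVol("stereo_module");
  const double stereoAngle = 0.1;  // rad, placeholder
  // Layer 0: strips along the nominal direction.
  moduleVol.placeVolume(sensorVol,
      dd4hep::Transform3D(dd4hep::RotationZ(0.0),
                          dd4hep::Position(0, 0, -0.5 * dd4hep::mm)));
  // Layer 1: same sensor, rotated by the stereo angle about its normal.
  moduleVol.placeVolume(sensorVol,
      dd4hep::Transform3D(dd4hep::RotationZ(stereoAngle),
                          dd4hep::Position(0, 0, 0.5 * dd4hep::mm)));

  // A real constructor would build envelopes and assign layer/module IDs to
  // the placements; this only shows the relative rotation.
  dd4hep::Volume motherVol = description.pickMotherVolume(det);
  dd4hep::PlacedVolume pv = motherVol.placeVolume(moduleVol);
  pv.addPhysVolID("system", x_det.id());
  det.setPlacement(pv);
  return det;
}
DECLARE_DETELEMENT(StereoStripExample, create_detector)
```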
IMO there is also an issue with the lack of one-dimensional segmentations. Sure, a 2D segmentation could be used, but it then reports a strip length that is in general incorrect. Take the SiTrackerEndcap2 example, where the sensitive volumes are trapezoids. In the case of a trapezoidal volume with strip readout, the reconstruction needs to probe the distance to the volume boundary in order to calculate, e.g., a noise covariance matrix for the strip measurements. I think the solution to this is to have Segmentations that only report one dimension, indicating to the reconstruction that the other dimensions must be determined in another way.
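For concreteness (an editorial addition): with a strip pitch from the segmentation and a strip length obtained by probing the volume boundary, the per-hit measurement covariance would typically use the uniform-distribution variance w^2/12 in each direction, e.g.:

```cpp
#include <array>

// 2x2 measurement covariance for a strip hit in its local (u, v) frame,
// using the uniform-distribution variance w^2 / 12 for each direction.
// pitch comes from the (1D) segmentation; stripLength must be obtained from
// the sensitive volume and, for a trapezoid, depends on the strip position.
std::array<double, 4> stripHitCovariance(double pitch, double stripLength) {
  const double varU = pitch * pitch / 12.0;              // measured direction
  const double varV = stripLength * stripLength / 12.0;  // along the strip
  return {varU, 0.0,
          0.0,  varV};  // row-major
}
```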
I will be working on a PR to resolve this unless I get feedback that indicates that I should do otherwise. Thanks in advance for any feedback!