joesilber opened this issue 4 years ago
This transform comes directly from the online DB. It's generated by the script load_petal_alignments_from_db. The plan has been to replace it with a transform we would fit with desimeter, but this has not been implemented yet.
I would suggest different yaml files for totally different hardware configurations, like KPNO vs LBNL petal test, rather than different configs in the same file.
However, we could have several configurations in the same yaml file to accommodate changes of petals, each with a range of dates for validity, like we have for the spectrograph calibrations, but I don't think we need that now.
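For concreteness, a minimal sketch of what a multi-configuration petal-alignments.yaml with validity date ranges could look like (the keys and values here are purely illustrative, not the actual desimeter schema):

```python
import yaml  # PyYAML

# Purely illustrative schema: each named configuration carries a validity
# date range plus per-petal-location alignment parameters.
EXAMPLE_YAML = """
kpno-2019-12-16:
  valid_from: '2019-12-16'
  valid_to: null            # open-ended: still the current KPNO alignment
  alignments:
    0: {dx_mm: 0.012, dy_mm: -0.034, rot_deg: 0.0021}
lbnl-2020-04-21:
  valid_from: '2020-04-21'
  valid_to: null
  alignments:
    0: {dx_mm: 0.0, dy_mm: 0.0, rot_deg: 0.0}
"""

configs = yaml.safe_load(EXAMPLE_YAML)
print(sorted(configs.keys()))  # ['kpno-2019-12-16', 'lbnl-2020-04-21']
```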
So the ptl2fp() function would get an additional argument specifying the file name to use?
We have to think about hardware configurations more generally. It's not just petal-alignments.yaml, but also the coordinates of positioners and fiducials in the table fp-metrology.csv, the transform from tangent plane to focal plane in raytrace-tan2fp.json, and the default transform from FVC to focal plane in single-lens-fvc2fp.json.
I would have different data subdirectories for totally different hardware configurations like the LBNL test petal vs the KPNO focal plane. We could use an environment variable for this, with a default corresponding to the current data directory if the variable is not set.
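A rough sketch of that environment-variable idea, assuming a hypothetical variable name DESIMETER_DATA (neither the name nor the layout is existing desimeter code):

```python
import os

def desimeter_data_dir():
    """Pick the data directory for the current hardware configuration.

    Falls back to the directory shipped with the package (py/desimeter/data)
    when the hypothetical DESIMETER_DATA variable is not set.
    """
    default = os.path.join(os.path.dirname(__file__), "data")
    return os.environ.get("DESIMETER_DATA", default)
```

Different hardware setups (KPNO focal plane vs. LBNL test petal) would then just point DESIMETER_DATA at different directories while keeping the same file names inside each one.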
Yes I see.
My inclination is to have a single file with all the different configuration options. Then when you run an analysis, you pick one of those options. It gives you a bunch of key/value pairs saying which data to use. Like in pseudo-code:
The configuration file would contain:
{'config a': {'petal-alignments': '2019-12-16 KPNO'},
'config b': {'petal-alignments': '2020-04-21 LBNL'}
}
A data file like the petal-alignments file would contain:
{'2019-12-16 KPNO': {the_data_from_KPNO_that_day...},
'2020-04-21 LBNL': {the_data_from_LBNL_that_day...}
}
But of course you guys have much more expertise in how you want these things handled. And I understand there may be configuration management tools you already have in place that you want to use.
I do think this issue may arise quite soon, when we start running desimeter for the spare petal at LBNL.
Documenting some thoughts before thinking some more:
All of these data files currently live in py/desimeter/data, i.e. the configuration state is at least contained within the desimeter package itself.

Two different versions of timestamps: (1) our best current knowledge of what the hardware state was on a given date, vs. (2) what we thought the hardware state was at the time, i.e. the parameters as they were known on that date.
(1) is mostly what we want now, and we are still regularly post-facto updating the params to apply to previous data. If desimeter starts being used more for operations, then (2) becomes more important. Good versioning could cover that, but it would be a pain if we ever needed to replay what we thought the state was on N>>1 different nights without having to check out N>>1 different versions of desimeter. desimodel faced similar questions for fiberassign, and effectively only supports (2) without supporting (1).
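To make the two flavors concrete, a single parameter set could carry both kinds of timestamp; the field names below are hypothetical:

```python
# (1) when the hardware was actually in this state, vs.
# (2) when we derived / believed these parameters.
record = {
    'name': 'kpno-petal-alignments',
    'valid_from': '2019-12-16',  # (1) applies to data taken from this date on
    'valid_to': None,            #     open-ended validity
    'derived_on': '2020-03-02',  # (2) when the parameters were actually fit
    'params': {'petal 0': '...'},
}
```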
I like a single 2D master table, with columns == every configurable option and rows == config names. An option would sometimes be a parameter directly, but more often a unique key that references some lower-level dataset. If you want to run a different state, you make a new row. But I'm no expert on this stuff.
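A tiny sketch of that master table, using an astropy Table with made-up column names and values:

```python
from astropy.table import Table

# One row per named configuration, one column per configurable option.
# Each value is either a parameter or a key into a lower-level data file.
master = Table(
    rows=[
        ('kpno-2019-12-16', '2019-12-16 KPNO', 'fp-metrology.csv', 'raytrace-tan2fp.json'),
        ('lbnl-2020-04-21', '2020-04-21 LBNL', 'fp-metrology.csv', 'raytrace-tan2fp.json'),
    ],
    names=('CONFIG', 'PETAL_ALIGNMENTS', 'METROLOGY', 'TAN2FP'),
)

# Selecting the state to run is then just selecting a row.
row = master[master['CONFIG'] == 'lbnl-2020-04-21'][0]
print(dict(zip(row.colnames, row)))
```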
In ptl2fp.py there is a utility function for getting petal alignment data: https://github.com/desihub/desimeter/blob/f3e2500d70e91ea72b180fb2ce5f6493ecc5b6d2/py/desimeter/transform/ptl2fp.py#L44-L51

The file path for grabbing the alignment data is hard-coded there. This will cause versioning confusion in more than one foreseeable future case.
To avoid stateful confusion, I suggest that we have multiple configurations in petal-alignments.yaml, identifying each one by a unique key. And the ptl2fp() function will need an additional, required argument providing this key.
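As a hypothetical sketch only (function name, signature, and yaml layout are invented here, not the current desimeter API), the keyed lookup could look like:

```python
import os
import yaml

def get_petal_alignment(config_key, filename=None):
    """Return the alignment data for one named configuration.

    Making config_key a required argument forces callers to state explicitly
    which hardware configuration they are analyzing.
    """
    if filename is None:
        # hypothetical default location inside the package data directory
        filename = os.path.join(os.path.dirname(__file__), "..", "data",
                                "petal-alignments.yaml")
    with open(filename) as fx:
        all_configs = yaml.safe_load(fx)
    if config_key not in all_configs:
        raise KeyError(f"no petal-alignment configuration '{config_key}' in "
                       f"{filename}; available: {sorted(all_configs)}")
    return all_configs[config_key]
```

ptl2fp() would then take the same required key and pass it through to this lookup.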