desihub / desitarget

DESI Targeting

we should be setting subpriority in a reproducible way #337

Closed · moustakas closed this issue 6 years ago

moustakas commented 6 years ago

I'm spinning this issue off from https://github.com/desihub/fiberassign/issues/137, although #205 and https://github.com/desihub/fiberassign/issues/78 are related.

In the output target catalog, the SUBPRIORITY column is populated when we run bin/select_targets, via io.add_subpriority, by drawing random numbers from [0, 1).
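As a rough illustration (a minimal sketch, not the actual desitarget.io code; the function signature and table handling are assumptions), that step amounts to something like:

```python
import numpy as np

def add_subpriority(targets, rng=None):
    """Fill the SUBPRIORITY column with uniform random draws in [0, 1)."""
    if rng is None:
        rng = np.random.RandomState()   # unseeded: a different realization on every run
    targets['SUBPRIORITY'] = rng.uniform(0.0, 1.0, size=len(targets))
    return targets
```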

  1. Should we be using a random seed (perhaps tied to the imaging data release or the git hash/tag?) so these priorities are reproducible (see the sketch after this list)? Maybe this doesn't matter...

  2. Shouldn't we have a more general model, whereby the subpriority "concept" gets set in, e.g., targetmask.yaml and then generated on-the-fly when making the merged target list in mtl.make_mtl, e.g., in a new targets.add_subpriority method?
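For item 1, here is a minimal sketch of what a reproducible variant could look like. This is purely illustrative and not anything currently in desitarget; the seed_label string and the hashing scheme are assumptions. The idea is to derive the RNG seed from a fixed label, such as the data-release name or a code tag, so that rerunning select_targets on the same inputs yields identical subpriorities.

```python
import hashlib
import numpy as np

def reproducible_subpriority(targets, seed_label='dr7'):
    """Assign SUBPRIORITY in [0, 1) from an RNG seeded by a fixed label."""
    # Hash the label (e.g. data-release name or git tag) down to a 32-bit seed.
    seed = int(hashlib.sha256(seed_label.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.RandomState(seed)
    targets['SUBPRIORITY'] = rng.uniform(0.0, 1.0, size=len(targets))
    return targets
```

Note that this is still sensitive to the row order of the input catalog; hashing each TARGETID individually would remove even that dependence, at some cost in speed.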

Thoughts?

sbailey commented 6 years ago
  1. Should we be using a random seed (perhaps tied to the imaging data release or the git hash/tag?) so these priorities are reproducible? Maybe this doesn't matter...

In principle, having it be reproducible would be nice. In practice I'm not sure it matters.

  2. Shouldn't we have a more general model, whereby the subpriority "concept" gets set in, e.g., targetmask.yaml and then generated on-the-fly when making the merged target list in mtl.make_mtl, e.g., in a new targets.add_subpriority method?

I'm nervous about this one. The subpriority should be added at the very beginning as a static property of the target itself, not something that gets updated on-the-fly by MTL. I don't see any benefits to generating it on-the-fly (but feel free to educate me...) but I do see big difficulties in traceability / reproducibility if MTL is setting subpriorities.

I could see an SV-specific case of letting MTL scale subpriorities for the purpose of adjusting effective post-fiberassignment target densities; perhaps that is what you are referring to. I'd still let TS write the original random subpriority [0, 1) and have any scaling be an algorithmically traceable adjustment to that original subpriority (i.e., the only random number generation would still be in a single place in target selection and written into the targets file).
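To make the distinction concrete, here is a hypothetical sketch of the kind of SV-only, traceable adjustment described above (the selection mask and scale factor are invented for illustration; this is not an actual MTL change): the original random draws are left untouched and only rescaled deterministically.

```python
import numpy as np

def scale_subpriority(mtl, downweight, factor=0.5):
    """Deterministically rescale SUBPRIORITY for the rows selected by `downweight`."""
    sub = np.array(mtl['SUBPRIORITY'], dtype='f8', copy=True)
    sub[downweight] *= factor   # no new random draws; divide by `factor` to recover the originals
    mtl['SUBPRIORITY'] = sub
    return mtl
```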

Opinion: we should not scale at all for the real survey -- the cuts should be adjusted during SV to achieve the desired densities, not via post-facto subpriority manipulation or downsampling.

moustakas commented 6 years ago

I'm nervous about this one. The subpriority should be added at the very beginning as a static property of the target itself, not something that gets updated on-the-fly by MTL. I don't see any benefits to generating it on-the-fly (but feel free to educate me...) but I do see big difficulties in traceability / reproducibility if MTL is setting subpriorities.

I understand this answer, @sbailey, but I'm confused because (I think) it contradicts what you wrote in https://github.com/desihub/fiberassign/issues/78, where you advocated for using SUBPRIORITY to rank-order standard stars during fiber assignment (e.g., STD_WD should trump a generic standard star), rather than as a simple (random) tie-breaker.

One alternative proposal -- which would neither require nor change the current usage of SUBPRIORITY -- discussed on-list (see this thread) was to add white dwarfs as dark-time targets in targetmask.yaml, essentially ensuring that they will always be targeted at the highest priority, both as science and as calibration targets.

However, we would need consensus from the Targeting WG that this change is acceptable. Other thoughts?

sbailey commented 6 years ago

Good catch. I just updated #78 to change the opinion I expressed there.

Making WDs high-priority, low-density normal targets is probably fine (though ultimately, as you say, the decision has to come from TS and the science working groups, not Data Systems). At the same time, I'm not convinced that a low density of even perfect calibrators really helps that much given the other systematics of fiber losses. We fundamentally need to be averaging over fibers even if a handful of them are astrophysically great calibrators.

moustakas commented 6 years ago

True, but we still want to observe white dwarfs over ~F stars (with potentially high metallicity) during normal operations, and since we're not ranking standards by subpriority we have no other way of guaranteeing that this happens in dark time (standards don't have a PRIORITY, and MWS_WD targets are only observed in bright time).