grzeimann / Goldilocks_Documentation

Documentation for the Hobby Eberly Telescope Automatic Pipeline of the Habitable Planet Finder Instrument

Discussion surrounding Telluric Correction #1

Open · gully opened this issue 3 years ago

gully commented 3 years ago

I'm delighted to see Goldilocks moving towards automated telluric corrections! πŸŽ† This is awesome! Here are a few thoughts:

Minor comments

the sky fiber is typically 93-97% of the throughput of the science

  • I get the same ratio with blank sky and twilight flats βœ…

We fit order by order for this stage to construct the best initial models that we can.

  • I think I would benefit from asking you about the mechanics of these steps.

Major or merely lengthier comments

  1. Static versus dynamic blaze

The blaze drifts in time, albeit negligibly for most orders. For a certain few orders in the deep water bands the blaze drift is more significant, so using a static template produces blaze corrections that jut up or down. In my experience these artificial mounds drive the choice of smoothing kernels, so it is good to avoid introducing them if possible. I think you mask those orders, which is probably the expedient thing to do since there is so little astrophysical information coming through them.

  2. Continuum normalization

Is the continuum normalization process you described the "percentile mapper" you've mentioned before? We now have 3 tutorials on continuum normalization with muler:

We have added a .flatten() method to muler that implements the Savitzky-Golay filter (inherited from lightkurve), and we have a .smooth_spectrum() method that employs a Gaussian Process method with celerite.

  3. Fitting the residuals with a PCA basis

The concern with fitting an observed spectrum with a PCA basis is the extent to which the scientific signal is non-orthogonal to the PCA basis vectors. How much does the PCA-based technique cause self-subtraction of the stellar signal? I think the risk of such self-subtraction diminishes as the length of the spectrum grows. So in the pathological case where the spectrum is a single stellar line near a telluric line, the self-subtraction will be dramatic. In the limit that the spectrum is the entire 50,000+ pixel HPF spectrum, the underlying stellar spectrum shares almost no overlap with the TelFit-derived eigenspectra, I think. One could quantify this self-subtraction risk by obtaining the PCA weights (eigenvalues) for a telluric-free stellar template smoothed and resampled to the HPF configuration. That's probably overkill, but would put some minds at ease.
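To make that concrete, here is a small numpy sketch of the check I have in mind; the array names are placeholders rather than anything in Goldilocks, and I assume the PCA eigenspectra have orthonormal rows (as sklearn's PCA returns them):

```python
import numpy as np

# Placeholders, not Goldilocks variables:
#   telluric_basis   : (n_components, n_pixels) orthonormal PCA eigenspectra from TelFit models
#   stellar_template : (n_pixels,) telluric-free stellar template, smoothed and
#                      resampled to the HPF wavelength grid, continuum-normalized

def self_subtraction_fraction(telluric_basis, stellar_template):
    """Fraction of the stellar template's variance that the telluric basis can absorb."""
    residual = stellar_template - 1.0       # zero-center about the continuum
    weights = telluric_basis @ residual     # projection = least-squares weights for orthonormal rows
    mimicked = telluric_basis.T @ weights   # the part of the template the basis can reproduce
    return np.sum(mimicked**2) / np.sum(residual**2)
```

A fraction near zero would mean the telluric eigenspectra can barely mimic the stellar spectrum, i.e. little room for self-subtraction.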

  4. Parameter choices

You vary three parameters: airmass, O2, and H2O. Airmass and H2O make sense. I would have assumed surface temperature and pressure matter, since they control the thermodynamics of the opacities through pressure and thermal broadening. My hunch is that the adopted O2 is roughly proportional to surface pressure, and choosing O2 probably has the benefit of taking up the slack in other model imperfections by relaxing the assumption that the H2O and O2 compositions are pinned to each other. Varying these knobs is ultimately aimed at encoding different PCA spectra, the sums of which will probably capture the vast majority of the variance in any real telluric spectrum. I suppose the number of PCA vectors will increase with more tuning parameters varied, and the fraction of variance explained will go up as well.

  5. How did you set 15 components?

Was it based on fraction of variance explained? At what point do you worry about overfitting?
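For reference, the bookkeeping I have in mind for the last two points is simply the cumulative explained-variance curve of a PCA run over the grid of TelFit models; a rough sketch, where `telfit_grid.npy` is a hypothetical stand-in for however the grid generated over airmass, O2, and H2O is stored:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in: an (n_models, n_pixels) array of TelFit transmission
# spectra computed over a grid of airmass, O2, and H2O values.
model_grid = np.load("telfit_grid.npy")

pca = PCA(n_components=20)
pca.fit(model_grid)                      # sklearn removes the mean spectrum internally

cumulative = np.cumsum(pca.explained_variance_ratio_)
for k, frac in enumerate(cumulative, start=1):
    print(f"{k:2d} components -> {frac:.5f} of the variance explained")
```

Plotting that curve would show where adding components stops buying much.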

grzeimann commented 3 years ago

I'm delighted to see Goldilocks moving towards automated telluric corrections! πŸŽ† This is awesome! Here are a few thoughts:

Minor comments

  • LDLS -- what does the acronym stand for?

Laser Driven Light Source; I will clear this up in the documentation.

the sky fiber is typically 93-97% of the throughput of the science

  • I get the same ratio with blank sky and twilight flats βœ…

We fit order by order for this stage to construct the best initial models that we can.

  • I think I would benefit from asking you about the mechanics of these steps.

OK, I will rearrange the discussion of the least-squares minimization for the PCA components. Also, I made a mistake in my initial documentation: there are only 5 PCA components when I fit order by order, so as to avoid overfitting.
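In sketch form (not the actual pipeline code), the order-by-order step is just an ordinary least-squares solve for the component weights over the telluric-affected pixels of that order; the names and the throughput cut below are illustrative:

```python
import numpy as np

def fit_order_weights(residual_flux, components, model_transmission, threshold=0.985):
    """Illustrative per-order least-squares fit of ~5 PCA component weights.

    residual_flux      : (n_pixels,) continuum-normalized residual spectrum for one order
    components         : (n_comp, n_pixels) PCA eigenspectra for that order
    model_transmission : (n_pixels,) TelFit model transmission, used to select fit pixels
    """
    mask = model_transmission < threshold                      # keep pixels with real telluric absorption
    A = components[:, mask].T                                  # (n_fit_pixels, n_comp) design matrix
    weights, *_ = np.linalg.lstsq(A, residual_flux[mask], rcond=None)
    correction = components.T @ weights                        # telluric correction over the full order
    return weights, correction
```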

Major or merely lengthier comments

  1. Static versus dynamic blaze

The blaze drifts in time, albeit negligibly for most orders. For a certain few orders in the deep water bands the blaze drift is more significant, so using a static template produces blaze corrections that jut up or down. In my experience these artificial mounds drive the choice of smoothing kernels, so it is good to avoid introducing them if possible. I think you mask those orders, which is probably the expedient thing to do since there is so little astrophysical information coming through them.

I'm not quite sure what this means. I agree that the blaze drifts with time, but the blaze should be a function of the 2D orientation of the detector, not of which channels have water bands. Perhaps I am confusing terminology with my use of the word blaze.

  2. Continuum normalization

Is the continuum normalization process you described the "percentile mapper" you've mentioned before? We now have 3 tutorials on continuum normalization with muler:

No, I don't use the same percentile mapper function. My new continuum normalization is very similar to that function I described before, but I found the percentile mapper to be less reliable than my new implementation.

We have added a .flatten() method to muler that implements the Savitzky-Golay filter (inherited from lightkurve), and we have a .smooth_spectrum() method that employs a Gaussian Process method with celerite.

I have looked through these, and I think they are great. The SG filter is very similar to what I am doing. Ultimately, my continuum mapping may need to be enhanced for non-telluric-standard stars, where stellar features are more common.
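For illustration only (not the exact Goldilocks implementation, and the window length and polynomial order below are arbitrary), a generic Savitzky-Golay flatten of the kind .flatten() performs looks roughly like:

```python
import numpy as np
from scipy.signal import savgol_filter

def flatten_spectrum(flux, window_length=101, polyorder=2):
    """Divide out a smooth Savitzky-Golay estimate of the continuum (illustrative sketch)."""
    good = np.isfinite(flux)
    continuum = np.full_like(flux, np.nan)
    continuum[good] = savgol_filter(flux[good], window_length, polyorder)
    return flux / continuum
```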

  3. Fitting the residuals with a PCA basis

The concern with fitting an observed spectrum with a PCA basis is the extent to which the scientific signal is non-orthogonal to the PCA basis vectors. How much does the PCA-based technique cause self-subtraction of the stellar signal? I think the risk of such self-subtraction diminishes as the length of the spectrum grows. So in the pathological case where the spectrum is a single stellar line near a telluric line, the self-subtraction will be dramatic. In the limit that the spectrum is the entire 50,000+ pixel HPF spectrum, the underlying stellar spectrum shares almost no overlap with the TelFit-derived eigenspectra, I think. One could quantify this self-subtraction risk by obtaining the PCA weights (eigenvalues) for a telluric-free stellar template smoothed and resampled to the HPF configuration. That's probably overkill, but would put some minds at ease.

I am using 13,000+ pixels in the fitting (when accounting for selected orders and wavelengths with enough telluric absorption to be appropriate for fitting [<98.5% throughput]). I agree this is always a concern. The free choice of components and pixels for fitting plays a large role in whether you expect self-subtraction or not. I would say the large number of pixels, the small number of eigenbases, and the limitation that the PCA was constructed from physical models, where the power is isolated to regions with telluric features, put my mind at ease. However, you get the best results if you combine a physical understanding of your source with the mathematical freedom of a PCA.

  4. Parameter choices

You vary three parameters: airmass, O2, and H2O. Airmass and H2O make sense. I would have assumed surface temperature and pressure matter, since they control the thermodynamics of the opacities through pressure and thermal broadening. My hunch is that the adopted O2 is roughly proportional to surface pressure, and choosing O2 probably has the benefit of taking up the slack in other model imperfections by relaxing the assumption that the H2O and O2 compositions are pinned to each other. Varying these knobs is ultimately aimed at encoding different PCA spectra, the sums of which will probably capture the vast majority of the variance in any real telluric spectrum. I suppose the number of PCA vectors will increase with more tuning parameters varied, and the fraction of variance explained will go up as well.

  5. How did you set 15 components?

Was it based on fraction of variance explained? At what point do you worry about overfitting?

It was based on the fraction of variance explained and a by-eye look at the fits of telluric standard stars. A more rigorous effort might still be warranted.

gully commented 3 years ago

Awesome! We had an off-line phone conversation including @jessicaluna, Greg, and me. The takeaways are as follows:

  1. Obtaining the mean telluric spectrum

We spent a while discussing how to obtain the average telluric spectrum, since it involves a few steps. We now understand the overall procedure to obtain the average telluric spectrum---the black line with gray border. This step has two parts: obtaining the average TelFit-based spectrum and the average data-driven correction to this model-based spectrum. The two components act together or separately in different contexts, analogous to a "pre-whitening" step; the key idea is to remove the bulk trends to isolate residuals. The PCA acts on these "zero-centered" residuals, an essential prerequisite for PCA to work effectively.
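Schematically (array names are placeholders, and whether the residual is taken as a ratio or a difference is a detail I am glossing over here):

```python
import numpy as np

# Placeholders for the quantities discussed above:
#   observed      : (n_pixels,) continuum-normalized spectrum of a telluric standard
#   mean_telluric : (n_pixels,) the average TelFit-based spectrum combined with the
#                   average data-driven correction (the black line with gray border)

def zero_centered_residual(observed, mean_telluric):
    """Remove the bulk telluric trend so the PCA sees residuals centered on zero."""
    return observed / mean_telluric - 1.0
```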

  2. Concern for overfitting

I am less concerned with overfitting now, since regions-of-interest can be masked before determining the PCA weights, and the weights are usually determined on (masked subregions of) multiple orders, totaling roughly 13,000 pixels. We discussed the prospect of validating the approach by quantifying the amount of overfitting through injection-recovery experiments (harder), or simply drilling down on particularly well-understood signals, e.g. line depths or RV curves. For stellar spectra with lots of lines, the challenge becomes how to assign the continuum. The PCA approach should tend to overfit in those contexts, regardless of masking a region-of-interest.
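A minimal version of the injection-recovery idea, where `run_telluric_correction` is a hypothetical stand-in for the eventual pipeline call and the line parameters are made up:

```python
import numpy as np

def inject_line(wavelength, flux, center, depth, width=0.2):
    """Multiply a Gaussian absorption line of known depth into a spectrum (Angstroms)."""
    line = 1.0 - depth * np.exp(-0.5 * ((wavelength - center) / width) ** 2)
    return flux * line

def recovered_depth(wavelength, corrected_flux, center, width=0.2):
    """Crude recovery metric: one minus the minimum flux within a few line widths."""
    near = np.abs(wavelength - center) < 3 * width
    return 1.0 - np.min(corrected_flux[near])

# Sketch of the experiment (run_telluric_correction does not exist yet):
# injected  = inject_line(wavelength, raw_flux, center=11000.0, depth=0.2)
# corrected = run_telluric_correction(wavelength, injected)
# print(recovered_depth(wavelength, corrected, center=11000.0) / 0.2)  # ~1 means little overfitting
```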

  3. Path forward and connection to muler/gollum

We agreed that a path forward is as follows: the baseline PCA fit will be applied as a matter of course, as laid out in the documentation. An additional bespoke tuning of the PCA eigenweights is possible on a per-user basis if, say, your spectrum has sharp lines overlapping lots of telluric lines. For those sources, muler or gollum may provide a mechanism to feed in a doctored-and-resampled stellar template when deriving the PCA weights. This method should mitigate some amount of overfitting and therefore provide better out-of-band prediction accuracy. The Python method would consume the static PCA eigenweights and average spectrum, which are only 7 MB and can be provided on this repo.
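One possible shape for that mechanism, purely as a sketch (none of these names exist in muler or gollum, and the masking rule is illustrative): solve for the telluric PCA weights while ignoring pixels where the resampled stellar template has appreciable lines.

```python
import numpy as np

def fit_weights_with_template(residual_flux, components, stellar_template, line_threshold=0.99):
    """Illustrative sketch: derive telluric PCA weights while ignoring stellar-line pixels.

    residual_flux    : (n_pixels,) zero-centered residual spectrum
    components       : (n_comp, n_pixels) telluric PCA eigenspectra
    stellar_template : (n_pixels,) doctored-and-resampled stellar template
    """
    keep = stellar_template >= line_threshold        # drop pixels inside stellar lines
    A = components[:, keep].T
    weights, *_ = np.linalg.lstsq(A, residual_flux[keep], rcond=None)
    return weights
```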