rodluger / starry

Tools for mapping stars and planets.
https://starry.readthedocs.io
MIT License

Linear solve when one star has ydeg=0 #253

Closed: smoh closed this issue 3 years ago

smoh commented 4 years ago

Hi,

I am trying to do what is done in the "Eclipsing binary: Linear solution for the maps" notebook (https://rodluger.github.io/starry/latest/notebooks/EclipsingBinary_Linear.html). I have two questions and one minor bug report.

  1. I want to set one star's ydeg=0 (a constant surface map) and fit only for the other star's surface map. When I do that and run solve:
>>> mu, cho_cov = sys.solve(t=time)

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<timed exec> in <module>

~/.conda/envs/rr/lib/python3.8/site-packages/starry/kepler.py in solve(self, design_matrix, t)
   1135             inds = slice(n, n + body.map.Ny)
   1136             body.map.amp = x[inds][0]
-> 1137             body.map[1:, :] = x[inds][1:] / body.map.amp
   1138             n += body.map.Ny
   1139 

~/.conda/envs/rr/lib/python3.8/site-packages/starry/maps.py in __setitem__(self, idx, val)
    207         elif isinstance(idx, tuple) and len(idx) == 2 and self.nw is None:
    208             # User is accessing a Ylm index
--> 209             inds = get_ylm_inds(self.ydeg, idx[0], idx[1])
    210             if 0 in inds:
    211                 raise ValueError("The Y_{0,0} coefficient cannot be set.")

~/.conda/envs/rr/lib/python3.8/site-packages/starry/_indices.py in get_ylm_inds(ydeg, ls, ms)
     36 
     37         if (ls.start < 0) or (ls.start > ydeg):
---> 38             raise ValueError("Invalid value for `l`.")
     39 
     40         # Loop through all the Ylms

ValueError: Invalid value for `l`.

It fails because `ls` is the slice object `slice(1, 1, 1)`, so the `ls.start > ydeg` check fires (1 > 0, since ydeg=0).

I feel like there is no reason this shouldn't be possible; is it just a special case that isn't handled when solve packs the parameters? A minimal sketch of the setup that triggers this is below.
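For concreteness, this is roughly the kind of setup that hits the error (the radii, masses, priors, noise level, and time grid here are all made up for illustration; the only thing that matters is that one body's map has ydeg=0):

    import numpy as np
    import starry

    starry.config.lazy = False

    # Primary with a free surface map; secondary held to a uniform (ydeg=0) map
    pri = starry.Primary(starry.Map(ydeg=5), r=1.0, m=1.0)
    sec = starry.Secondary(starry.Map(ydeg=0), r=0.3, m=0.01, porb=1.0)
    sys = starry.System(pri, sec)

    time = np.linspace(-0.5, 0.5, 1000)
    flux = sys.flux(t=time)          # stand-in for the observed light curve
    sys.set_data(flux, C=1e-8)

    # Gaussian priors on both maps (scalar prior variances, purely illustrative)
    pri.map.set_prior(L=1e-2)
    sec.map.set_prior(L=1e-4)

    mu, cho_cov = sys.solve(t=time)  # raises ValueError: Invalid value for `l`.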

  2. There is a minor bug, I think, in the code that sets the priors:
    
    # Prior on primary
    pri_mu = np.zeros(pri.map.Ny)
    pri_mu[0] = 1.0
    pri_L = np.zeros(pri.map.Ny)
    pri_L[0] = 1e-2
    pri_L[1:] = 1e-2
    pri.map.set_prior(mu=pri_mu, L=pri_L)

    # Prior on secondary
    sec_mu = np.zeros(pri.map.Ny)
    sec_mu[0] = 0.1
    sec_L = np.zeros(pri.map.Ny)
    sec_L[0] = 1e-4
    sec_L[1:] = 1e-4
    sec.map.set_prior(mu=sec_mu, L=sec_L)

For the secondary, I think it should be `sec.map.Ny`; this only worked because the data were generated with ydeg=5 for both the primary and the secondary. A corrected version is sketched below.
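In other words, the secondary block should presumably read (only the `Ny` lookups change):

    # Prior on secondary
    sec_mu = np.zeros(sec.map.Ny)
    sec_mu[0] = 0.1
    sec_L = np.zeros(sec.map.Ny)
    sec_L[0] = 1e-4
    sec_L[1:] = 1e-4
    sec.map.set_prior(mu=sec_mu, L=sec_L)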

  3. Finally, in this and the other linear-solve example (https://rodluger.github.io/starry/latest/notebooks/LinearSolve.html), it is assumed that ydeg is set correctly, but do you have any advice on how one should choose it with real data?

Many thanks.

**Your setup (please complete the following information):**
 - Version of starry: 1.0.0
 - Operating system: RHEL 7.8
 - Python version & installation method (pip, conda, etc.): python 3.8 in an isolated conda environment.
rodluger commented 3 years ago

@smoh So sorry for the long delay in addressing this. I'm finally sitting down and going through all the issues. Thanks for pointing this out. I implemented a simple fix for the two issues you raised in #270, which I'll be merging into master in the next couple days, with a new release in the next week or two. New unit test here.

As for your question, it really depends on the application. When in doubt, it's best to use as high a ydeg as you can (although too high can lead to a speed hit and -- worse -- severe numerical instabilities). The current algorithm in starry works pretty well for phase curves with ydeg as high as 35 or so, but for occultations, instabilities can kick in above ydeg of 20 (ish). So don't go any higher than that.

When your data is not very informative about high-degree modes (which is almost always the case), you'll want to regularize. One way is to place a Gaussian prior on the modes whose variance drops exponentially with increasing l. You can do that easily with map.set_prior() -- perhaps something simple like:

import numpy as np

# Prior variance that decays exponentially with spherical harmonic degree l
def variance(l, tau=5):
    return np.exp(-l / tau)

# One entry per Ylm coefficient: (2 * l + 1) coefficients at each degree l
lam = np.concatenate([np.ones(2 * l + 1) * variance(l) for l in range(map.ydeg + 1)])
map.set_prior(L=lam)

That places a prior on each of the spherical harmonic coefficients whose variance is equal to exp(-l / tau). Low-degree modes will therefore have the highest variance, and it will drop exponentially toward high l. This lets you use arbitrarily high ydeg without risking over-fitting. The tricky bit is figuring out what tau should be -- it's essentially the slope of the power spectrum of the map. You could try to use physics to predict what it should be or go hierarchical and fit for it -- depending on the dataset, you might be able to constrain it pretty well.
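For the eclipsing-binary setup in this issue, a rough sketch of how that prior could be applied to both maps before the linear solve (variable names follow the notebook quoted above; the tau values are invented and mu is left at its default, so treat this as illustrative):

import numpy as np

def variance(l, tau):
    return np.exp(-l / tau)

def exp_decay_prior(m, tau):
    # Prior variance for every Ylm coefficient of the map `m`
    return np.concatenate(
        [np.ones(2 * l + 1) * variance(l, tau) for l in range(m.ydeg + 1)]
    )

# Illustrative tau values; in practice you would predict them from physics
# or fit for them hierarchically
pri.map.set_prior(L=exp_decay_prior(pri.map, tau=5.0))
sec.map.set_prior(L=exp_decay_prior(sec.map, tau=3.0))
mu, cho_cov = sys.solve(t=time)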

I'll keep this issue open until I get around to merging the changes into master, but that should only take a couple days. Let me know if you have any other questions.

smoh commented 3 years ago

Thank you for your response and for updating the code. I'll give what you said a try.