rodluger / Limbdark.jl

Analytical transit light curves for limb darkened stars
MIT License

Precision for high values of `n_u` #43

Open ericagol opened 5 years ago

ericagol commented 5 years ago

When there are a large number of limb-darkening coefficients, the computation becomes inaccurate. Not sure why, but some possible problems/solutions:

rodluger commented 5 years ago

@ericagol I'm finding that I can't do better than about 1 ppm error with the polynomial fit to the non-linear law. I'm using c_ = [0.2, 0.2, 0.2, 0.2] and finding that I'd have to go above l = 15 to get sub-ppm precision. But that's the point where numerical instabilities start pushing back; by the time I get to l = 25, the fit is significantly worse. This may not be an easily solvable issue. We'll just have to keep that in mind for when we actually start trying to fit stellar models.
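For concreteness, here is a minimal sketch (not code from this repo) of the kind of polynomial fit being discussed: a least-squares fit of a degree-`l` monomial basis to the four-parameter nonlinear law I(\mu) = 1 - \sum_k c_k (1 - \mu^{k/2}) with c = [0.2, 0.2, 0.2, 0.2], reporting the maximum intensity residual as `l` grows.

```julia
# Sketch: least-squares polynomial fit of degree l to the four-parameter
# nonlinear law I(mu) = 1 - sum_k c_k * (1 - mu^(k/2)).
c = [0.2, 0.2, 0.2, 0.2]
I_nl(mu) = 1 - sum(c[k] * (1 - mu^(k / 2)) for k in 1:4)

mu = range(0.0, 1.0, length = 1000)
y = I_nl.(mu)

for l in (5, 10, 15, 20, 25)
    A = [m^j for m in mu, j in 0:l]   # monomial design matrix
    coeffs = A \ y                    # linear least squares
    println("degree $l: max intensity residual = ",
            maximum(abs.(A * coeffs .- y)))
end
```

The high-degree monomial basis is badly conditioned, which is consistent with the numerical instabilities mentioned above for l ≳ 25.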

ericagol commented 5 years ago

@rodluger What radius-ratio r do you use? I did find better than ppm precision for septic limb-darkening; you should be able to see this in the latest version of occult_nonlinear.jl. I found that the precision of the fit does depend on the choice of initial coefficients. And, the linearization trick might be good for seeding the initial values.

rodluger commented 5 years ago

OK, I'll play around with that a bit. I'm using r = 0.1 in this script to generate Figure 13 if you want to have a look. What's the linearization trick?

ericagol commented 5 years ago

@rodluger The idea is that, given the basis of \mu^j terms in equation (59), an arbitrary I(\mu) function is a linear combination of these terms, so the coefficients can be solved for analytically with a matrix equation (the minimum of the chi-square for any linear model is the solution of a linear equation). In practice I think it's better to compute a limb-darkened transit for the arbitrary I(\mu); that transit can be expressed as a linear combination of the S_n terms (equation 63), so the d_n coefficients can also be solved for linearly. The drawback is that this won't guarantee positive surface brightness, and it won't force the flux outside of transit to be unity (which is why there was that jump at the edge of transit that you noticed in my first fits, which were linear). The linearized fit is implemented in the function optimize_fit_linear within occult_nonlinear.jl.
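A rough sketch of that linear solve, with hypothetical names (the S_n basis light curves would come from Limbdark itself and are assumed to be precomputed here):

```julia
# Given a matrix S whose columns are the S_n terms of equation (63)
# evaluated at the observation times, and a target light curve f for an
# arbitrary I(mu), the best-fit d_n minimize ||S*d - f||^2, which is a
# single linear solve.
fit_d_linear(S::AbstractMatrix, f::AbstractVector) = S \ f

# Caveats noted above: nothing here enforces positive surface brightness
# or unit flux out of transit; those would need explicit constraints.
```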

rodluger commented 5 years ago

@ericagol Ah, I see. I'm already doing something along the lines of what you mention: I solve the linear problem, then feed the result into a nonlinear solver as an initial guess (though in practice this second step doesn't help much). I'm minimizing the chi^2 of the fit to the intensity. I find that no matter how high a polynomial order I choose, I simply can't get the error on the intensity to improve significantly:

[figure: intensity residuals of the polynomial fit]

I'm already at a 500th-order polynomial and the errors are still at the level of 1e-7. Is this simply the nature of the beast (it's hard to fit the nonlinear LD with a polynomial), or am I doing something wrong? I used this script to generate the figure.
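A minimal sketch of the linear-then-nonlinear seeding described above, assuming Optim.jl (not a dependency of this repo) for the nonlinear refinement and the same four-parameter law with c = [0.2, 0.2, 0.2, 0.2]; the degree is illustrative.

```julia
# Sketch: seed a nonlinear chi^2 minimization of the intensity fit with
# the linear least-squares solution (assumes Optim.jl).
using Optim

c = [0.2, 0.2, 0.2, 0.2]
I_nl(mu) = 1 - sum(c[k] * (1 - mu^(k / 2)) for k in 1:4)

l  = 15                               # polynomial degree (illustrative)
mu = range(0.0, 1.0, length = 1000)
y  = I_nl.(mu)
A  = [m^j for m in mu, j in 0:l]      # monomial design matrix

p0      = A \ y                       # linear solution as the initial guess
chi2(p) = sum(abs2, A * p .- y)       # chi^2 of the fit to the intensity
res     = optimize(chi2, p0, BFGS(); autodiff = :forward)
println("max residual: ", maximum(abs.(A * Optim.minimizer(res) .- y)))
```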

rodluger commented 5 years ago

@ericagol As for your fits to the flux, that can certainly lead to less error for a particular problem. But in practice we're usually solving for the orbital parameters in an inference scheme, so it sounds like this would require re-fitting a polynomial at each step of the MCMC chain?

ericagol commented 5 years ago

@rodluger Right, the fractional errors in occult_nonlinear are in the flux, not the surface brightness.

I've looked into fitting the surface brightness with a polynomial before, and I think I obtained the same results as you did. The basic problem seems to be that the shape of the non-linear limb-darkening curve near the limb simply can't be fit to arbitrary accuracy with a polynomial limb-darkening function.

I suspect that if you wanted to approximate the non-linear limb-darkening with starry/limbdark, you could fit each of the non-linear limb-darkening components separately; an arbitrary non-linear limb-darkening law should then be expressible as a sum of these component fits weighted by its coefficients (perhaps applying Sturm's theorem to make sure that the surface brightness is everywhere positive). A rough sketch of this idea is given below.
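Here is that sketch, under the assumption that each component (1 - \mu^{k/2}) is fit by a polynomial once, after which any set of nonlinear coefficients maps linearly onto polynomial coefficients (names and degree are illustrative, not from the repo):

```julia
# Sketch: fit each nonlinear component (1 - mu^(k/2)) with a polynomial
# once; an arbitrary set of nonlinear coefficients c then maps linearly
# onto polynomial limb-darkening coefficients.
l  = 15                                    # polynomial degree (illustrative)
mu = range(0.0, 1.0, length = 1000)
A  = [m^j for m in mu, j in 0:l]           # monomial design matrix

# Columns of P are the fitted polynomial coefficients of each component.
P = hcat((A \ [1 - m^(k / 2) for m in mu] for k in 1:4)...)

# Polynomial approximation of I(mu) = 1 - sum_k c_k * (1 - mu^(k/2));
# positivity of the surface brightness (e.g. via Sturm's theorem) would
# still need to be checked separately.
poly_coeffs(c) = [1.0; zeros(l)] .- P * c
```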

ericagol commented 4 years ago

@ericagol @rodluger I'm wondering if the precision would improve if we use compensated summation.

rodluger commented 4 years ago

@ericagol What is compensated summation?

ericagol commented 4 years ago

It is a method for tracking rounding/truncation errors when you carry out a sum. I tried it out, and it doesn’t help, unfortunately.

I found that it helped immensely for my N-body integrator.
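For reference, a textbook Kahan (compensated) summation routine in Julia; this is the standard technique being referred to, not code from the repo.

```julia
# Kahan (compensated) summation: track the rounding error of each
# partial sum in a correction term c.
function kahan_sum(x)
    s = zero(eltype(x))    # running sum
    c = zero(eltype(x))    # compensation for lost low-order bits
    for xi in x
        y = xi - c
        t = s + y
        c = (t - s) - y    # recovers the part of y that was rounded away
        s = t
    end
    return s
end
```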
