rmandelb opened this issue 10 years ago
(1) would be easy to implement, I think: add a `discrete_step` kwarg (or something) to the `DistDeviate` setup, and then just round the internal uniform deviate return value to the nearest multiple of `discrete_step`. But I don't know if that's the more scientifically useful option. (Um, the internal uniform deviate which gives you the 0 < p < 1 for use with the CDF, that is.)
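Just to make that concrete, here is a rough standalone sketch of the rounding, done outside DistDeviate for now. Everything below is illustrative: `discrete_step` is only the proposed kwarg name (it does not currently exist), the p(n) is made up, and it assumes `DistDeviate.val(p)` maps a cumulative probability p to the corresponding value.

```python
import galsim

# Made-up p(n) over an allowed Sersic range, purely for illustration.
dd = galsim.DistDeviate(1234, function=lambda n: n * (6.2 - n),
                        x_min=0.3, x_max=6.2)
ud = galsim.UniformDeviate(5678)

def draw_discrete_n(discrete_step=0.01):
    # This is what the hypothetical discrete_step kwarg would do internally:
    # round the uniform deviate to the nearest multiple of discrete_step
    # before inverting the CDF, so at most ~1/discrete_step distinct n
    # values can ever come out.
    u = min(round(ud() / discrete_step) * discrete_step, 1.0)
    return dd.val(u)

n_values = set(round(draw_discrete_n(), 12) for _ in range(10000))
print('distinct n values drawn:', len(n_values))
```

With `discrete_step=0.01` that caps the number of distinct Sersic setups at roughly 100, which is the whole point of option (1).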
Yes, (1) should indeed be that easy! Another question is whether anyone would use it... :)
For reference on why this is important, here is a quick script you can run on your machine to show how long the setup takes for larger-n Sersics:
import galsim
import time

# Sersic indices from n = 0.3 to 6.2 in steps of 0.1
nlist = [0.3 + 0.1*k for k in range(60)]
im = galsim.Image(32, 32, scale=0.28)
psf = galsim.Moffat(fwhm=0.9, beta=3)

# Two passes: the first pays the per-n setup cost, the second reuses it.
for iter in range(2):
    tstart = time.time()
    for n in nlist:
        t0 = time.time()
        gal = galsim.Sersic(half_light_radius=1.4, n=n)
        final = galsim.Convolve(psf, gal)
        im = final.drawImage(image=im)
        t1 = time.time()
        print('n = %f, time = %f' % (n, t1-t0))
    tend = time.time()
    print('Total time = %f' % (tend-tstart))

# Repeat with loosened accuracy requirements.
gsparams = galsim.GSParams(xvalue_accuracy=1.e-2, kvalue_accuracy=1.e-2,
                           maxk_threshold=1.e-2, alias_threshold=1.e-2)
for iter in range(2):
    tstart = time.time()
    for n in nlist:
        t0 = time.time()
        gal = galsim.Sersic(half_light_radius=1.4, n=n, gsparams=gsparams)
        final = galsim.Convolve(psf, gal)
        im = final.drawImage(image=im)
        t1 = time.time()
        print('n = %f, time = %f' % (n, t1-t0))
    tend = time.time()
    print('Total time with loose params = %f' % (tend-tstart))
On my machine, n=6.2 takes over 7 seconds on the first pass, but only 0.16 seconds on the second pass. Loosening the accuracy requirements helps: these drop to 0.65 and 0.08 seconds respectively. Smaller n is better, and n < 2 is quite reasonable. But it would be nice if we could have some precomputation that would allow GalSim to go directly to the more efficient calculations.
The question is how best to do the interpolation in 2 variables (n and k) to meet our accuracy requirements. This is the tricky research problem Rachel mentioned.
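To make the tabulate-and-interpolate idea a bit more concrete, here is a rough, self-contained sketch. This is not how the GalSim internals actually work: the toy brute-force quadrature, the approximate b_n = 2n - 1/3, the normalization, and the grid resolutions below are all placeholder assumptions.

```python
import numpy as np
from scipy.special import j0
from scipy.interpolate import RectBivariateSpline

def sersic_hankel(n, k, r_max=100.0, n_r=10000):
    """Toy Hankel transform of a unit-scale Sersic profile,
    Itilde(k) = 2 pi * int_0^inf exp(-b_n r^(1/n)) J0(k r) r dr,
    by brute-force trapezoidal quadrature (illustrative only)."""
    b = 2.0*n - 1.0/3.0                 # rough approximation to the true b_n
    r = np.logspace(-6.0, 2.0, n_r)     # log-spaced to resolve the inner cusp
    return 2.0*np.pi*np.trapz(np.exp(-b * r**(1.0/n)) * j0(k*r) * r, r)

# Tabulate on a coarse (n, k) grid.  Choosing these resolutions so that the
# interpolation error stays below the accuracy requirements is exactly the
# research problem.
n_grid = np.linspace(0.5, 6.0, 23)
k_grid = np.linspace(0.0, 10.0, 51)
table = np.array([[sersic_hankel(n, k) for k in k_grid] for n in n_grid])
table /= table[:, :1]                   # normalize so Itilde(0) = 1 for each n

interp = RectBivariateSpline(n_grid, k_grid, table)

# Check the interpolant at an n that is not in the table.
n_test = 3.17
direct = np.array([sersic_hankel(n_test, k) for k in k_grid])
direct /= direct[0]
approx = interp(n_test, k_grid)[0]
print('max |interpolated - direct| = %.2e' % np.max(np.abs(approx - direct)))
```

The real version would presumably live at the C++ level and cover the full k range (including the high-k asymptotic form mentioned in @dkirkby's comment), but the structure is the same: precompute a table once, then interpolate in (n, k).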
Cross-referencing this useful comment by @dkirkby on a different issue about his efforts at tabulating the high-k functional form of the Hankel transforms. (I'd remembered that comment and thought it would be here in this issue, but it wasn't, so I went looking for it...)
Based on a discussion with @rmjarvis and @esheldon today, it would be useful to have ways to circumvent the limitation of Sersic profiles that every single new value of n incurs a significant setup time. There were two things we discussed, one easier and one harder:
(1) Easier: If one has a p(n) they want to draw from, we could offer some discretization option to `DistDeviate` so that instead of varying n continuously, it draws from p(n) at some specific number of discrete values. Presumably, if you tell it you want e.g. 100 different values from p(n), it should choose those values to be evenly spaced in the CDF (e.g., at p = 0.01, 0.02, 0.03, ...) rather than evenly spaced in n; a rough sketch of that quantile spacing follows after point (2).
(2) Harder: We could tabulate the Hankel transforms that are the main setup cost for some number of n values, and just interpolate between them. There is a tricky research problem here: figuring out what resolution in n is needed to keep systematics below the desired level. Actually doing the tabulation and having code that can read the tables back in shouldn't be hard.
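For concreteness, here is a minimal sketch of the quantile spacing in (1), using a made-up p(n), numpy for the CDF inversion, and an ordinary dict as the per-n cache. None of this is existing GalSim machinery.

```python
import numpy as np
import galsim

# Made-up p(n) over the allowed Sersic range; any tabulated p(n) would do.
n_grid = np.linspace(0.3, 6.2, 1000)
p_n = np.exp(-0.5 * ((n_grid - 1.5) / 0.8)**2)

# Normalized CDF, inverted at 100 evenly spaced probabilities
# (p = 0.005, 0.015, ..., 0.995) to get 100 discrete n values.
cdf = np.cumsum(p_n)
cdf /= cdf[-1]
quantiles = (np.arange(100) + 0.5) / 100.0
n_discrete = np.interp(quantiles, cdf, n_grid)

# Each discrete value carries equal probability mass by construction, so
# drawing from p(n) reduces to picking one of them uniformly; cache the
# Sersic for each value so its setup cost is paid only once.
rng = np.random.default_rng(1234)
cache = {}
for _ in range(1000):
    n = float(rng.choice(n_discrete))
    if n not in cache:
        cache[n] = galsim.Sersic(half_light_radius=1.4, n=n)
    gal = cache[n]   # use gal as usual (shear, convolve, draw, ...)
print('distinct Sersic setups needed:', len(cache))
```

So even when drawing thousands of galaxies, you only ever pay the setup cost for (at most) the 100 tabulated values of n.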