astropy / coordinates-benchmark

Accuracy Benchmarks of different coordinate conversion codes. Maintainers: @astrojuanlu, @astrofrog, and @cdeil
https://www.astropy.org/coordinates-benchmark/summary.html

Measure speed of each tool for celestial conversions #13

Closed: cdeil closed this issue 10 years ago

cdeil commented 12 years ago

astropy is slow because instantiating the coordinate classes is slow, not because the transformations are slow:

In [24]: from astropy import coordinates

In [26]: %timeit coordinates.FK4Coordinates(0, 0, unit='deg')
100 loops, best of 3: 5.68 ms per loop

In [27]: c =  coordinates.FK4Coordinates(0, 0, unit='deg')

In [29]: %timeit c.transform_to(coordinates.FK5Coordinates)
1000 loops, best of 3: 238 us per loop

@eteq Why is creating an astropy coordinate object so slow? Can this be sped up easily? Instantiating a Position object in astrolib.coords is 1000 times faster:

In [36]: import coords

In [37]: %timeit coords.Position((0,0))
100000 loops, best of 3: 4.89 us per loop
astrofrog commented 12 years ago

@cdeil - performance has been a low priority for now, but I agree we should improve it a little before 0.2. An easy way to track down where most of the time is spent would be to use line_profiler (http://packages.python.org/line_profiler/). Let us know if you decide to try this out and figure out what the bottleneck is! (I would normally have done this already, but this week is super busy for me.)

cdeil commented 12 years ago

OK, I'll try to line profile it now.

astrofrog commented 12 years ago

Just a thought - I wonder whether the bottleneck is actually the units framework? I think most of the time is likely to be spent in SphericalCoordinatesBase._initialize_latlon, so that would be a good first function to profile. (line_profiler requires you to decorate any function/method you want profiled with a custom decorator.)
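
A minimal sketch of that workflow, assuming line_profiler is installed (the script name is illustrative; the profile decorator is injected into builtins by the kernprof runner, so this only runs under kernprof; to profile an astropy internal such as _initialize_latlon you would temporarily decorate that method in the astropy source instead):

# profile_coords.py - run with: kernprof -l -v profile_coords.py
# `profile` is injected by kernprof at runtime; it is not an import.
from astropy import coordinates

@profile
def make_coord():
    # the suspected hotspot from the %timeit numbers above
    return coordinates.FK4Coordinates(0, 0, unit='deg')

for _ in range(100):
    make_coord()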

cdeil commented 12 years ago

This was simple.

%prun coordinates.FK5Coordinates(0, 0, unit='deg')

shows that it spends all the time in pyparsing.

Using unit=units.deg instead of unit='deg' solved the speed issue:

%timeit coordinates.FK5Coordinates(0, 0, unit=units.deg)
10000 loops, best of 3: 27.4 us per loop

I'll adapt the coordinates-benchmark/astropy/convert.py script to use units.deg now.

I think it could make sense to mention this in the coordinates documentation. Or to fix it by having a fast "parser" for the common cases, i.e. check for 'deg', 'degree', 'rad', and 'radians' explicitly before invoking pyparsing. Thoughts?
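
A minimal sketch of that fast-path idea, assuming a hypothetical helper at the coordinates level (_parse_angle_unit and the lookup table are illustrative names, not astropy API):

from astropy import units

# Hypothetical lookup table covering the common cases listed above;
# anything else falls back to the full pyparsing-based parser.
_COMMON_ANGLE_UNITS = {
    'deg': units.deg,
    'degree': units.degree,
    'rad': units.rad,
    'radians': units.rad,
}

def _parse_angle_unit(unit):
    if isinstance(unit, str):
        try:
            return _COMMON_ANGLE_UNITS[unit]  # O(1) dict hit, no parsing
        except KeyError:
            pass
    return units.Unit(unit)  # slow path: invokes the full parser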

astrofrog commented 12 years ago

I did not know about %prun! That's awesome. So it was indeed astropy.units? Let's just cc @mdboom in case he has any thoughts on this (as the poor performance of pyparsing might affect other parts of astropy that use units)

+1 on just manually parsing in this case, given that it improves the performance by a factor of 200!

astrofrog commented 12 years ago

Just going to cc @eteq since it would probably be easy to fix this in the #471 PR

astrofrog commented 12 years ago

Just to confirm that Unit parsing is the bottleneck:

In [2]: %timeit Unit('deg')
100 loops, best of 3: 4.86 ms per loop
eteq commented 12 years ago

I guess that's not surprising when I think about it - it has to try a variety of different possible formats and make a lot of decisions. But it might be that we'll want to look at ways to 'short-circuit' Unit in certain cases for performance's sake. Or, we may just want to leave this as an incentive to get people to use units.degree instead of 'deg'. @demitri will certainly be happy with that development :)

eteq commented 12 years ago

Oh, wait, @astrofrog, were you suggesting this manual parsing be done in coordinates (e.g. astropy/astropy#471), rather than in units? I suppose that would make sense (check for those strings, and if it's anything else, we fall back on ordinary unit parsing).

mdboom commented 12 years ago

Sure. I never thought of optimizing object creation for units, because I didn't expect them to be used that way. Isn't this benchmark a bit misleading, though? If a user wanted to deal with tens of thousands of coordinates, they would just use a numpy array of values rather than creating so many coordinate objects. (I haven't been following much of the coordinates discussion, so I don't even know if that's possible in the current design.)

I looked at a number of parsing libraries for Python before arriving at pyparsing. pyparsing was the only one that was mature, didn't require a preprocessing step, didn't require a C library, and was a standard BNF parser. It is far from the fastest, but I didn't really consider speed an issue, given that any given application is likely to deal with only on the order of a dozen units.

That said, there is some low-hanging fruit here. The easiest win is to have the parser handle "simple" unit strings (where the string is just a single unit name) with a dictionary lookup. The next step might be to cache the results of parsing in memory, but then you have to be careful to limit the size of the cache dictionary, to handle the case where the user does want to create thousands of different units. I think the former gets us 90% of the way there for 10% of the effort.

@eteq: I'd prefer not to add a special-case check for those few unit strings in coordinates itself -- that's just spreading the optimization out rather than putting it right in the core where everyone will benefit.

EDIT: Expect a separate pull request for the unit optimization shortly.
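
A rough illustration of the two-step approach described above (all names are invented for the sketch; this is not the code from astropy/astropy#506):

_simple_units = {}        # single-name strings, e.g. 'deg' -> Unit object
_parse_cache = {}         # memoized results for compound strings
_PARSE_CACHE_MAX = 1024   # bound, so unusual workloads can't grow it forever

def lookup_or_parse(string, full_parse):
    unit = _simple_units.get(string)
    if unit is not None:              # the "90% of the way there" dict hit
        return unit
    unit = _parse_cache.get(string)
    if unit is None:
        unit = full_parse(string)     # the expensive pyparsing path
        if len(_parse_cache) < _PARSE_CACHE_MAX:
            _parse_cache[string] = unit
    return unit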

mdboom commented 12 years ago

See https://github.com/astropy/astropy/pull/506

astrofrog commented 12 years ago

@eteq - yes, I'm suggesting doing the manual parsing in coordinates, not in units, for the sake of performance.

@mdboom - at the moment, the coordinates object doesn't yet support arrays (and may not for 0.2), so we just want to make sure it's not so slow that it puts people off from using it. In the end, this is a pretty special case, and I think we should deal with it at the coordinates level. I just thought I'd mention that it had come up, in case there were easy solutions for performance in general - and I think having a dictionary-based lookup before attempting a full parse would be great. If it's just as fast to do it in units as in coordinates, then obviously that would be ideal.

eteq commented 12 years ago

@mdboom @cdeil - I recommend we move further discussion of the astropy optimizations to astropy/astropy#506 and/or astropy/astropy#471 (I just put some comments there now). That way we can go ahead and merge this PR when it's ready without disrupting that conversation.

cdeil commented 11 years ago

Rebased on master and updated speed.txt.

astrofrog commented 10 years ago

@cdeil - I've updated to the latest vectorized Astropy API. Maybe you could rebase this since it would still be interesting?
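
For context, a minimal sketch of what the vectorized API allows (using the modern SkyCoord name, which postdates the FK4Coordinates/FK5Coordinates classes quoted earlier in this thread):

import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

# One object holds 10,000 positions; the transformation runs over the
# whole array at once, amortizing the per-object construction overhead.
ra = np.random.uniform(0, 360, 10000) * u.deg
dec = np.random.uniform(-90, 90, 10000) * u.deg
fk4 = SkyCoord(ra, dec, frame='fk4')
fk5 = fk4.transform_to('fk5')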

cdeil commented 10 years ago

Rebased on master and updated speed.txt.

It seems astropy.coordinates is now roughly as fast as kapteyn.celestial, but still an order of magnitude slower than pyast.

If speed benchmarking is still useful for astropy.coordinates at this point, I guess the proper way to do it would be to add a series of common use cases to https://github.com/astropy/astropy-benchmarks/ and then look in detail at where large improvements are possible by comparing to pyast; a sketch of such a benchmark follows below. @astrofrog - I would suggest merging this PR now and creating new PRs for such things.
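
A hedged sketch of what such an astropy-benchmarks entry could look like (asv times any method whose name starts with time_; the class name and exact file layout are assumptions, not the repo's actual contents):

import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

class FK4ToFK5Suite:
    """Benchmark a bulk FK4 -> FK5 transformation, asv-style."""

    def setup(self):
        # Build the coordinate array once, outside the timed method.
        ra = np.linspace(0.0, 359.0, 1000) * u.deg
        dec = np.linspace(-89.0, 89.0, 1000) * u.deg
        self.coords = SkyCoord(ra, dec, frame='fk4')

    def time_fk4_to_fk5(self):
        self.coords.transform_to('fk5')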

astrofrog commented 10 years ago

@cdeil - great, thanks! The overhead in Astropy is due in part to dealing with Quantities; this has been the subject of discussion on various astropy core issues. I think the priority at the moment is to implement more functionality, and then focus on optimization. An order of magnitude isn't too bad, at least :)