Closed nkeim closed 10 years ago
Thanks for investigating. I will leave the README as it is, recommending numba 0.11 with numpy 1.7.1, until numba rights itself.
I see now how to do the "major rewrite" of `_refine_numba()`: pass in a big results array (along with its length) to be changed in-place, and split all of the little length-2 vectors into `_x` and `_y` components. It might be less work to do it in Cython, but only if Cython makes it easy to overload the function to accept images with a variety of dtypes.
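A minimal sketch of that rewrite style, in plain Python for illustration (the function name, the 3x3 window, and the centroid math are hypothetical stand-ins for trackpy's actual refinement logic): the caller preallocates the results arrays and passes their length, and every length-2 (x, y) vector is split into scalar components so the loop allocates no small temporary arrays, which is the style early numba compiled well.

```python
import numpy as np

def refine_inplace(image, coords_x, coords_y, n, results_x, results_y):
    # Caller preallocates results_x/results_y; everything here is scalar
    # arithmetic on the split _x/_y components -- no length-2 arrays.
    for i in range(n):
        x = int(coords_x[i])
        y = int(coords_y[i])
        # brightness-weighted centroid over a 3x3 neighborhood,
        # accumulated with scalars only
        total = 0.0
        cx = 0.0
        cy = 0.0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                w = image[y + dy, x + dx]
                total += w
                cx += w * (x + dx)
                cy += w * (y + dy)
        results_x[i] = cx / total
        results_y[i] = cy / total
```

In practice one would decorate such a loop for compilation; the point here is only the calling convention (in-place results, explicit length, scalar components).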
I'm concerned about the "out-of-the-box" experience now for someone who downloads the latest Anaconda and then trackpy. Is there an easy conditional way to just keep numba's mitts off of `_refine_numba()` altogether?
From what I've read, conda doesn't make Cython as painless as it does numba. (For example, if a Windows machine doesn't have the right C compiler, conda can't help it.) But I am no expert on that -- Cython might be a viable choice.
The slow import time is bad. Maybe numba is compiling everything on import now. I can't find detailed release notes. Have you come across any? I pinged ContinuumIO's twitter account about it too. It might not be in the open.
See #83 for one approach that could serve in the meantime. Not sure if it passes local tests yet -- they are running really slow -- but `import trackpy` is back to normal speed.
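The deferred-compilation idea can be sketched like this (`lazy_jit` is a hypothetical illustration, not necessarily the mechanism in #83): postpone any numba work until the wrapped function is first called, so the import itself pays no compilation cost.

```python
def lazy_jit(func):
    # Wrap func so JIT compilation, if available, happens on first call
    # rather than at import time.
    cache = []

    def wrapper(*args, **kwargs):
        if not cache:
            try:
                from numba import autojit  # numba 0.12-era API
                cache.append(autojit(func))
            except ImportError:
                cache.append(func)  # no usable numba: pure-Python fallback
        return cache[0](*args, **kwargs)

    return wrapper

@lazy_jit
def hot_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total
```

Importing a module full of `lazy_jit`-wrapped functions is instant; only the first call to each function pays the compilation price.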
Take a look at this: https://twitter.com/teoliphant/status/433993752625942530
This prompts the question, "Why is this the new default?!"
Better question: why is this being announced via twitter?
also, get off my lawn.
Get off my lawn, indeed! :)
https://discussions.apple.com/thread/5472161?start=0&tstart=0
Travis himself invites us to post the broken code for priority attention: https://twitter.com/teoliphant/status/434076107738865664
@nkeim seems to understand the problem well enough, but I guess it doesn't hurt to get expert advice as it's offered. Hard to reduce _refine_numba to a self-contained concise question...
@nkeim probably could have said it better, but I wanted to jump at Travis's offer while it was fresh in mind. Feel free to chime in. https://groups.google.com/a/continuum.io/forum/#!topic/numba-users/-lNug_ZI2xs
Thanks! I added a link to a stripped-down, standalone version of feature.py that still causes the problem --- one of my failed experiments.
You may have noticed that just doing `import trackpy` with numba 0.12 installed takes upwards of 30 seconds, and execution of `_refine_numba()` also seems to take forever. After extensive study I've concluded that getting `_refine_numba()` back up to full speed will require a major rewrite. It's pretty unbelievable that the new Anaconda release would include a version of one of their flagship projects that is such a drastic regression in performance, but there you go.

For future reference: all of the trouble seems to be with `_refine_numba`; the numba subnet code is thankfully safe because it does not create any arrays and does not need any external math functions (such as `sqrt`).

Our best bet is to stick to numba 0.11 until the numba team can bring their new releases back up to that standard.
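To illustrate the property described above (the function and its inputs are made up for this example, not taken from the subnet code), here is a loop in that numba-friendly style: it allocates no arrays and calls no external math function, comparing squared distances instead of using `sqrt`.

```python
def count_within(xs, ys, n, x0, y0, radius):
    # Numba-friendly style: scalar accumulators only, no array creation,
    # and squared-distance comparison in place of an external sqrt call.
    r2 = radius * radius
    count = 0
    for i in range(n):
        dx = xs[i] - x0
        dy = ys[i] - y0
        if dx * dx + dy * dy <= r2:
            count += 1
    return count
```

Code written in this style compiled cleanly under both numba 0.11 and 0.12, which is consistent with the subnet code escaping the regression.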