max-sixty opened this issue 3 years ago
Thanks for your kind words about our little project. Please be warned that there is not much maintenance going on for this package. If you need any bigger changes or urgent fixes to numpy_groupies for xarray, you'd be largely on your own to implement them.
The `array` operation that you mention above is mainly a front-end for `np.split`. Probably we messed up something there. Creating a ton of array objects is quite a time waster, so we rarely use it ourselves. Maybe you want to have a look at it yourself; patches welcome.
About `sum`, you can provide `nan` as the `fill_value`:
```python
>>> aggregate(group_idx[:, :-1], flat[:-1].astype(float), "sum", size=(3, 4), fill_value=np.nan)
array([[ 0.,  1.,  2.,  3.],
       [ 4.,  5.,  6.,  7.],
       [ 8.,  9., 10., nan]])
```
Btw., `fill_value=[]` fixes the first issue. Looks like we need to improve the defaults selection.
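To illustrate what that means, here is a minimal pure-Python sketch (not npg's actual implementation; the function name `group_to_arrays` is made up) of the `array` operation with an empty-list fill value, so that empty groups come back as `[]` instead of `0`:

```python
def group_to_arrays(group_idx, a, size, fill_value=None):
    # Collect the members of each group, in input order.
    out = [None] * size          # None marks "no members seen yet"
    for g, v in zip(group_idx, a):
        if out[g] is None:
            out[g] = []
        out[g].append(v)
    # Empty groups receive fill_value, e.g. [] as suggested above.
    return [fill_value if grp is None else grp for grp in out]

print(group_to_arrays([0, 0, 2], [10, 20, 30], size=3, fill_value=[]))
# -> [[10, 20], [], [30]]
```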
Thanks @ml31415
For background, here's the xarray issue discussing using npg: https://github.com/pydata/xarray/issues/4473
FYI re the stacking: I implemented that using plain numpy, which worked out OK (https://github.com/pydata/xarray/pull/4746). My sense is that the result isn't too different from using this library; let me know if that's missing something, though.
I'll have a go at some point at finishing up https://github.com/pydata/xarray/pull/4540. The main issues there are around our historical groupby implementation, which is mostly based on a Python loop, rather than any issues with npg.
Ack re not much maintenance going on. To the extent you're accepting PRs, I think we'd still be interested in having this as at least an optional path for xarray; it would make our code significantly faster and simpler. I'd be interested whether your sense is that the current features are robust, or would require more work.
Thank you — Max
Of course PRs are welcome any time. I'd also be happy to add you to the committers, if you want to take over some long term responsibility.
About the robustness of the current features, please have a look at whether the unit tests are sufficient for your use case. The major part of the test suite compares the results of the optimized implementations against a generic implementation using only numpy functions. I have used the library in production for years, and errors there would have been quite costly to me. But as we just saw with the examples you came up with, that only holds true for the more frequently used features of this library, and the tests might need some extensions for your use cases.
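As a toy illustration of that testing approach (hypothetical function names, not npg's actual test code): an optimized single-pass implementation is checked against a naive per-group reference, and the two must agree.

```python
def generic_group_sum(group_idx, a, size):
    # Naive reference: sum the members of each group, one group at a time.
    return [sum(v for g, v in zip(group_idx, a) if g == i) for i in range(size)]

def fast_group_sum(group_idx, a, size):
    # Single-pass accumulation, standing in for an optimized backend.
    out = [0] * size
    for g, v in zip(group_idx, a):
        out[g] += v
    return out

# The test-suite idea: both implementations must produce the same result.
idx, vals = [0, 1, 0, 2, 1], [1, 2, 3, 4, 5]
assert fast_group_sum(idx, vals, 3) == generic_group_sum(idx, vals, 3)
```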
About stacking, I'm not sure I got your question right. Internally, npg loops naively over two 1D arrays. Everything else is handled with plain numpy functions: the inputs are prepared in 1D shape before the action starts, and the original shape is restored afterwards. So there are probably no speed gains hiding there; see https://github.com/ml31415/numpy-groupies/blob/786a78b2da9d8df94ac373cdf11eef93ac723a8c/numpy_groupies/utils_numpy.py#L192.
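A rough sketch of that flatten-then-reduce idea (assuming numpy; `np.bincount` stands in for npg's inner 1D loop, and `group_sum_nd` is a made-up name, not the actual library code):

```python
import numpy as np

def group_sum_nd(group_idx, a, size):
    # Bring everything into 1D shape first, then do the simple 1D reduction.
    flat_idx = np.ravel(group_idx)
    flat_a = np.ravel(a)
    # np.bincount with weights plays the role of the naive 1D loop here.
    return np.bincount(flat_idx, weights=flat_a, minlength=size)

a = np.arange(6).reshape(2, 3)           # [[0, 1, 2], [3, 4, 5]]
idx = np.array([[0, 1, 0], [1, 0, 1]])   # group label per element
print(group_sum_nd(idx, a, size=2))      # -> [6. 9.]
```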
The code itself is a bit "grown", and some parts might need refactoring. The main reason the weave part is still there is to have a speed reference for the numba implementation, so it hasn't seen new group functions for a while.
@dcherian @d1manson What do you think: shall we change the default `fill_value` from 0 for all operations to something operation-specific?
- `array`, `sort` -> `[]`
- `all`, `any`, `anynan`, `allnan` -> `False` (unchanged)
- `argmin`, `argmax` -> `-maxint`
- `mean`, `var`, `std` -> `np.nan`
- `min`, `max`, `cummax`, `cummin` -> `np.nan`
- `last`, `first` -> `np.nan`
- `len`, `sum`, `cumsum`, `sum_of_squares` -> `0` (unchanged)
- `prod`, `cumprod` -> `1`
- generic -> `np.nan`
It would change the current API a bit, but might be worth it for "doing the right thing" automatically. Any better suggestions for the values?
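If it helps the discussion, the proposal above could be sketched as a simple lookup table (`DEFAULT_FILL_VALUES` and `default_fill_value` are hypothetical names, with `math.nan` standing in for `np.nan`):

```python
import math

# Hypothetical mapping for the proposed operation-specific defaults.
DEFAULT_FILL_VALUES = {
    "array": [], "sort": [],
    "all": False, "any": False, "anynan": False, "allnan": False,
    "argmin": -(2**63 - 1), "argmax": -(2**63 - 1),   # -maxint
    "mean": math.nan, "var": math.nan, "std": math.nan,
    "min": math.nan, "max": math.nan, "cummax": math.nan, "cummin": math.nan,
    "last": math.nan, "first": math.nan,
    "len": 0, "sum": 0, "cumsum": 0, "sum_of_squares": 0,
    "prod": 1, "cumprod": 1,
}

def default_fill_value(func):
    # Anything not listed (the "generic" case) defaults to NaN.
    return DEFAULT_FILL_VALUES.get(func, math.nan)

print(default_fill_value("prod"))   # -> 1
```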
> Any better suggestions for the values?
With flox I think we set it to `numpy_method([np.nan, np.nan])`, so that's one option. `numpy_groupies` treats groups with all-NaN members and groups with 0 members the same (since NaN values are removed before reducing), so this could be an OK choice.

EDIT: the confusing thing will be `np.sum([np.nan, np.nan]) = np.nan` vs `np.nansum([np.nan, np.nan]) = 0`. Which one to pick? We might have different default `fill_value`s for `sum` and `nansum` as well.
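For reference, the two candidate semantics in a pure-Python sketch (mirroring how `np.sum` and `np.nansum` behave on an all-NaN group):

```python
import math

vals = [math.nan, math.nan]   # an all-NaN group

# np.sum-style: NaN propagates through ordinary addition.
assert math.isnan(sum(vals))

# np.nansum-style: NaNs are dropped first, so an all-NaN group sums to 0.
assert sum(v for v in vals if not math.isnan(v)) == 0
```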
Firstly, thanks for the impressive package. We're considering using it in https://github.com/pydata/xarray to provide faster groupby operations.
It looks like some forms of stacking / unstacking are supported too, if I'm reading "Form 4" in the readme correctly. Is it currently possible to supply a subset of the indices as part of that?
This works well:
But this doesn't:
Notably, supplying `sum` does compute, though the result has `0` rather than `nan`:

Ideally the "array" case above would return:
Of course, if this library — as per the name — is more focused on groupby than stacking, totally reasonable to close this as wontfix.
Thanks!