The underlying problem is that during evaluation biggus chunks the source arrays and limits them to a fixed number of bytes. So when two large sources have dtypes with different item sizes (i.e. number of bytes per item), the resulting chunks will have different lengths - e.g. for float32 each chunk will have at most MAX_CHUNK_SIZE / 4 elements, but for float64 each chunk will have at most MAX_CHUNK_SIZE / 8 elements.
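To put numbers on it, here's a rough illustration of how a byte limit turns into dtype-dependent chunk lengths (the MAX_CHUNK_SIZE value below is an assumed example, not necessarily biggus's actual constant):

```python
import numpy as np

MAX_CHUNK_SIZE = 8 * 1024 * 1024  # assumed byte limit, for illustration only

for dtype in (np.float32, np.float64):
    itemsize = np.dtype(dtype).itemsize
    print(np.dtype(dtype).name, MAX_CHUNK_SIZE // itemsize)

# float32 chunks hold twice as many elements as float64 chunks, so the
# chunks cut from two sources with different dtypes don't line up.
```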
The simplest workaround might be to switch to chunking with a fixed number of items (instead of bytes).
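Something along these lines (a hypothetical helper, not existing biggus code) would cut every source at the same indices regardless of dtype:

```python
def chunk_slices(length, max_items):
    """Yield slices covering `length` items, at most `max_items` per chunk."""
    for start in range(0, length, max_items):
        yield slice(start, min(start + max_items, length))

# A float32 source and a float64 source would both be cut at the same
# boundaries, so corresponding chunks always have matching lengths.
print(list(chunk_slices(10, 4)))  # three slices: 0:4, 4:8, 8:10
```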
Dupe of #163?
Or possibly it has the same root cause but is showing itself in a different way?
With thanks to @matthew-mizielinski for originally pointing this out...
Using biggus operators to combine 'large' arrays with differing dtypes raises an error when biggus runs the specified operator on the chunks to be processed.
This is demonstrated by the following code snippet and the error(s) produced when I ran it:
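In outline, the reproduction looks something like this (a minimal sketch; the shape and the use of `biggus.add` / `.ndarray()` are illustrative assumptions, not the exact original snippet):

```python
import numpy as np
import biggus

# Two sources big enough for evaluation to span several chunks, but with
# different dtypes (and therefore different item sizes).
shape = (4, 1000, 2000)  # illustrative; 'large' just means bigger than one chunk
a32 = biggus.NumpyArrayAdapter(np.zeros(shape, dtype=np.float32))
a64 = biggus.NumpyArrayAdapter(np.zeros(shape, dtype=np.float64))

# Building the lazy expression is fine...
result = biggus.add(a32, a64)

# ...the error appears at evaluation time, when biggus chunks the sources.
print(result.ndarray())
```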
If you make the shape of the arrays smaller, or unify the dtypes of the two arrays, then this error does not occur.
I've taken a little look into what might be causing this but have tied my brain in knots trying to follow the flow of execution that biggus follows to get to this point. So, instead of sitting on this for ages and trying to find a solution I figured it would be beneficial to raise this as an issue and also keep working on it.