A very specialised bug in the variance calculation when producing masked arrays (smaller arrays work fine, as does accessing the result via ndarray):
>>> import biggus
>>> import numpy as np
>>>
>>> arr = np.empty((10, 400, 720), dtype=np.float32)
>>> r = biggus.var(arr, axis=1)
>>>
>>> r
<_Aggregation shape=(10, 720) dtype=dtype('float32')>
>>> r.masked_array()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "biggus/__init__.py", line 1555, in masked_array
    result, = engine.masked_arrays(self)
  File "biggus/__init__.py", line 448, in masked_arrays
    return self._evaluate(arrays, True)
  File "biggus/__init__.py", line 442, in _evaluate
    ndarrays = group.evaluate(masked)
  File "biggus/__init__.py", line 428, in evaluate
    raise Exception('error during evaluation')
Exception: error during evaluation
>>> Exception in thread <biggus.StreamsHandlerNode object at 0x1949850>:
Traceback (most recent call last):
  File "lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "lib/python2.7/threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "biggus/__init__.py", line 277, in run
    self.output(self.process_chunks(input_chunks))
  File "biggus/__init__.py", line 305, in process_chunks
    return self.streams_handler.process_chunks(chunks)
  File "biggus/__init__.py", line 1341, in process_chunks
    result = self.finalise()
  File "biggus/__init__.py", line 1501, in finalise
    chunk = super(_VarMaskedStreamsHandler, self).finalise()
  File "biggus/__init__.py", line 1459, in finalise
    array.shape = self.current_shape
ValueError: total size of new array must be unchanged
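For reference, the ValueError at the bottom of the traceback is plain NumPy behaviour: in-place assignment to `.shape` is only valid when the total number of elements is unchanged, so `finalise()` must be computing a `current_shape` whose size disagrees with the accumulated array. A minimal sketch of that NumPy behaviour (the exact error wording varies between NumPy versions):

```python
import numpy as np

arr = np.empty(6, dtype=np.float32)
arr.shape = (2, 3)  # OK: 6 elements either way

# A shape with a different total size raises, just like in finalise().
caught = None
try:
    arr.shape = (2, 4)  # 8 elements != 6
except ValueError as exc:
    caught = exc
print(type(caught).__name__)
```

So the bug is most likely in how `current_shape` is tracked across chunks for the masked-variance handler, rather than in the reshape itself.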
This is still a live issue on master. I've now got someone who is hitting the problem in practice (it's no longer just theoretical), so I should get a good chance to look at it.
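In the meantime, a possible workaround for anyone hit by this (only viable while the source array still fits in memory, which of course defeats biggus' purpose for genuinely large data, but useful as a cross-check) is to compute the masked variance eagerly with `numpy.ma`:

```python
import numpy as np

arr = np.empty((10, 400, 720), dtype=np.float32)
arr[:] = 1.0  # fill so the variance is well-defined

# Eager masked variance via numpy.ma instead of biggus.var(...).masked_array().
masked = np.ma.masked_invalid(arr)
result = masked.var(axis=1)
print(result.shape)  # (10, 720)
```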