gerritholl opened this issue 7 years ago
I answered your question on StackOverflow.
I agree that this is unfortunate. The cleanest solution would be an integer dtype with missing value support in NumPy itself, but that isn't going to happen anytime soon.
I'm not entirely opposed to the idea of adding (limited) support for masked arrays in xarray (see also https://github.com/pydata/xarray/pull/1118), but this could be a lot of work for relatively limited return.
I definitely recommend trying dask for processing multi-gigabyte arrays. You might even find the performance boost compelling enough that you could forgive the limitation that it doesn't handle masked arrays, either.
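For example, something along these lines gives dask-backed, lazily evaluated arrays (the file name and chunk sizes below are only placeholders, not taken from this issue):

import xarray as xr

# hypothetical file name and chunk sizes; chunks=... makes the variables
# dask arrays that are loaded and reduced lazily, block by block
ds = xr.open_dataset("huge_file.nc", chunks={"time": 1000})
result = ds["some_variable"].mean("time").compute()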
I don't see how an integer dtype could ever support missing values; float missing values are specifically defined by IEEE 754, but for ints, every sequence of bits corresponds to a valid value. OTOH, NetCDF does have a _FillValue attribute that works for any type, including int. If we view xarray as "NetCDF in memory", that could be an approach to follow, but for numpy in general it would fairly heavily break existing code (see also http://www.numpy.org/NA-overview.html), in particular for 8-bit types. If I understand correctly, R uses INT_MAX as its sentinel, which would be 127 for int8 (apparently, R ints are always 32 bits). I'm new to xarray, so I don't have a good idea of how much work adding support for masked arrays would be, but I'll take your word that it's not straightforward.
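As a rough illustration of the _FillValue route (the variable name and sentinel below are made up): by default xarray's CF decoding masks the sentinel but promotes to float, while mask_and_scale=False keeps the integer dtype and the raw sentinel.

import numpy as np
import xarray as xr

# hypothetical int8 variable that uses -127 as its _FillValue sentinel
raw = xr.Dataset(
    {"flags": ("x", np.array([1, 2, -127, 4], dtype="int8"), {"_FillValue": np.int8(-127)})}
)

# default CF decoding masks the sentinel but promotes the data to a float dtype
decoded = xr.decode_cf(raw)
print(decoded["flags"].dtype)  # a float dtype; the sentinel becomes NaN

# skipping mask/scale decoding keeps int8 and the raw sentinel value
kept = xr.decode_cf(raw, mask_and_scale=False)
print(kept["flags"].dtype)  # int8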
In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity. If this issue remains relevant, please comment here; otherwise it will be marked as closed automatically.
I think this issue should remain open. I think it would still be highly desirable to implement support for true masked arrays, such that any value can be masked without throwing away the original value.
@gerritholl check out https://pandas-docs.github.io/pandas-docs-travis/whatsnew/v0.24.0.html#whatsnew-0240-enhancements-intna
I think that's the closest way of getting int support; from my understanding, supporting masked arrays directly would be a decent lift.
@max-sixty Interesting! I wonder what it would take to make use of this "nullable integer data type" in xarray. It wouldn't work to convert it to a standard numpy array (da.values) while retaining the dtype, but one could add a new .to_maskedarray() method returning a numpy masked array; that would probably be easier than adding full support for masked arrays.
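A rough sketch of that idea, assuming pandas' nullable Int64 dtype as the source (the to_maskedarray helper is hypothetical, not an existing xarray method):

import numpy as np
import pandas as pd

# hypothetical helper (not an existing xarray method): expose a pandas
# nullable-integer series to numpy consumers as a masked array instead of
# promoting missing values to float NaN
def to_maskedarray(series: pd.Series) -> np.ma.MaskedArray:
    mask = series.isna().to_numpy()
    # fill the NA slots with 0 so the buffer keeps its integer dtype
    data = series.to_numpy(dtype=series.dtype.numpy_dtype, na_value=0)
    return np.ma.MaskedArray(data, mask=mask)

s = pd.Series([1, 2, pd.NA, 4], dtype="Int64")
print(to_maskedarray(s))  # [1 2 -- 4], with an integer dtype underneath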
Pandas 1.0 uses pd.NA for integers, boolean, and string dtypes: https://pandas.pydata.org/pandas-docs/stable/whatsnew/v1.0.0.html#experimental-na-scalar-to-denote-missing-values
Currently I keep carrying a "float" copy of my data around just to be able to represent missing values. Also, NaN does not necessarily mean NA, which has already caused me quite some head-scratching in the past. Further, it would be a very cool indicator for seeing which values of a dense array should be converted into a sparse array.
I agree, I have this same issue with large genotyping data arrays, which often contain tiny integers and some degree of missingness in nearly 100% of raw datasets. Are there recommended workarounds now? I am thinking of consistently using Datasets instead of DataArrays, with a mask array accompanying every data array (see the sketch below), but I'm not sure whether that's the best interim solution.
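A minimal sketch of that interim workaround (variable names are made up): keep the raw integers and an explicit boolean mask as separate variables, and only apply .where(), which promotes to float, at the point where a computation actually needs the missing values.

import numpy as np
import xarray as xr

# hypothetical genotype data: raw int8 values plus an explicit boolean mask
ds = xr.Dataset(
    {
        "genotype": ("sample", np.array([0, 1, 2, 1], dtype="int8")),
        "missing": ("sample", np.array([False, False, True, False])),
    }
)

# storage stays int8; the promotion to float only happens here, when
# .where() replaces the masked entries with NaN for this one reduction
mean_genotype = ds["genotype"].where(~ds["missing"]).mean()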
I've recently come across marray, which is still very experimental (and still needs a hack to really work) but allows us to wrap masked arrays:
In [1]: import marray
...: import numpy as np
...: import xarray as xr
...:
...: # create a nested namespace for masked arrays wrapping numpy
...: xp = marray.masked_array(np)
...: data = xp.arange(10)
...: data.mask[:] = data.data % 2 == 0
...: # hack: set `__array_namespace__` to the nested namespace we just created
...: data.__array_namespace__ = lambda self, **kwargs: xp
...:
...: arr = xr.DataArray(data, dims="x")
...: arr
Out[1]:
<xarray.DataArray (x: 10)> Size: 80B
masked_array(data=[--, 1, --, 3, --, 5, --, 7, --, 9],
             mask=[ True, False,  True, False,  True, False,  True, False,
                    True, False],
       fill_value=999999)
Dimensions without coordinates: x
(There are a lot of other things that do not work, for example indexing / isel.)
Also, @shoyer, this is another instance of the nested array namespace I was talking about in the last meeting.
A great beauty of numpy's masked arrays is that they work with any dtype, since they do not rely on nan. Unfortunately, when I try to put my data into an xarray.Dataset, it converts ints to floats, as shown in the sketch below; this happens in the function _maybe_promote. Such type "promotion" is unaffordable for me: the memory consumption of my multi-gigabyte arrays would explode by a factor of 4. Secondly, many of my integer-dtype fields are bit arrays, for which a floating-point representation is not desirable.
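A minimal sketch of the promotion described above (the exact float dtype chosen may depend on the xarray version):

import numpy as np
import xarray as xr

masked = np.ma.MaskedArray(
    np.array([1, 2, 3, 4], dtype="int16"),
    mask=[False, True, False, False],
)
print(masked.dtype)  # int16

# putting the masked array into a Dataset promotes it to a float dtype,
# with the masked entry replaced by NaN
ds = xr.Dataset({"bits": ("x", masked)})
print(ds["bits"].dtype)  # a float dtype, no longer int16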
It would greatly benefit xarray if it could use masking while preserving the dtype of the input data. (See also: the StackOverflow question.)