Xarray uses the optional property `.nbytes` to indicate the size of wrapped arrays. Currently we don't implement `.nbytes` on `ManifestArray`s, so xarray defaults to estimating the size as essentially `arr.size * arr.dtype.itemsize`. In other words, it currently reports what the full size of the dataset would be if you loaded every referenced chunk into memory at once. But does that make sense for an array that can never be loaded into memory?
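To illustrate the scale of the problem, here is a minimal sketch of the fallback estimate (the shape and dtype are hypothetical, chosen only to show how large the number gets):

```python
import numpy as np

# Without .nbytes, xarray falls back to size * dtype.itemsize.
# Example shape/dtype are made up: daily global data at ~25 km.
shape = (365, 720, 1440)
dtype = np.dtype("float64")

size = int(np.prod(shape))
estimated_nbytes = size * dtype.itemsize
print(estimated_nbytes)  # ≈ 3 GB, despite no chunk ever being loaded
```

The reported number scales with the full dataset, even though a `ManifestArray` never materializes any of those bytes.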
There is another, completely different size to consider: that of the in-memory representation of the references themselves - see the discussion in #104. This is a known, fixed number (no lazy loading involved), and it is much smaller than the current `.nbytes`.
This latter number is what's relevant if you're trying to estimate RAM usage whilst manipulating references, so it's possibly related to the `__sizeof__` discussion in https://github.com/pydata/xarray/issues/5764.
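A rough sketch of what measuring the references themselves could look like. The `entries` dict and its `(path, offset, length)` layout are assumptions for illustration only, not the actual manifest data structure:

```python
import sys

# Hypothetical chunk manifest: chunk key -> (path, byte offset, byte length).
entries = {
    "0.0": ("s3://bucket/file.nc", 0, 1_000_000),
    "0.1": ("s3://bucket/file.nc", 1_000_000, 1_000_000),
}

def manifest_nbytes(manifest: dict) -> int:
    """Rough in-memory size of the reference dict itself,
    NOT the size of the referenced bytes."""
    total = sys.getsizeof(manifest)
    for key, (path, offset, length) in manifest.items():
        total += sys.getsizeof(key) + sys.getsizeof(path)
        total += sys.getsizeof(offset) + sys.getsizeof(length)
    return total

print(manifest_nbytes(entries))  # hundreds of bytes, not megabytes
```

Here the references occupy a few hundred bytes while the chunks they point to total 2 MB, so which number `.nbytes` reports changes the answer by orders of magnitude.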