**Describe the bug**
`ndvi` gives incorrect results when the input arrays are unsigned integers, due to overflow in the subtraction.
**Expected behavior**
Consider the following:

```python
import numpy as np
import xarray as xr
import xrspatial

a = xr.DataArray(np.array([[1, 1, 1], [1, 1, 1]], dtype='uint16'))
b = xr.DataArray(np.array([[0, 1, 2], [0, 1, 2]], dtype='uint16'))
xrspatial.ndvi(a, b)
```
The values in the third column are wrong due to overflow of the unsigned integers during subtraction; see https://github.com/numpy/numpy/issues/21237. Using the above data, the overflow can be illustrated directly.
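A minimal sketch of the wraparound with plain NumPy, using the same data as above (this is my illustration, not code from xrspatial):

```python
import numpy as np

a = np.array([[1, 1, 1], [1, 1, 1]], dtype='uint16')
b = np.array([[0, 1, 2], [0, 1, 2]], dtype='uint16')

# uint16 subtraction wraps around instead of going negative:
# 1 - 2 yields 65535, not -1.
diff = a - b
print(diff)       # → [[1 0 65535]
                  #    [1 0 65535]]
print(diff.dtype) # → uint16
```

Dividing that wrapped difference by the sum (3) is what produces the large positive NDVI values in the third column instead of -1/3.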
A solution could be to call `np.subtract` directly in, e.g., `_normalized_ratio_cpu`, though that may not be robust in all cases. I think there are some alternatives in the numpy issue linked above.
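One possible shape of a fix, sketched as a hypothetical stand-in for `_normalized_ratio_cpu` (this is not xrspatial's actual implementation): promote the inputs to a float dtype before subtracting, so unsigned values cannot wrap.

```python
import numpy as np

def normalized_ratio(nir, red):
    """Hypothetical sketch of a normalized-ratio fix: cast to float64
    before subtracting so unsigned-integer inputs cannot overflow."""
    nir = np.asarray(nir, dtype='float64')
    red = np.asarray(red, dtype='float64')
    return (nir - red) / (nir + red)

a = np.array([[1, 1, 1], [1, 1, 1]], dtype='uint16')
b = np.array([[0, 1, 2], [0, 1, 2]], dtype='uint16')
print(normalized_ratio(a, b))
# third column is now -1/3 rather than the overflowed 65535 / 3
```

Casting unconditionally costs a copy for integer inputs, so a real fix might only promote when the dtype is an unsigned integer.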