Closed · greg9q closed 1 year ago
Thanks for working on this. I guess this fixes #67? Would it be easy to write a unit test or example script that reproduces the issue?
Yes, this addresses exactly the issue described in #67. The following quick test works in 0.10.5, fails in 0.11.3, and works with the proposed patch:
```python
import tempfile
from pathlib import Path

import numpy as np
import numpy.testing
from pyhdf.SD import SD, SDC


def test_pyhdf_sd_read_char():
    with tempfile.TemporaryDirectory() as temp_dir:
        hdf_file = str(Path(temp_dir) / "test.hdf")
        sd = SD(hdf_file, SDC.WRITE | SDC.CREATE)
        sds = sd.create("test_sds", SDC.CHAR, [5])
        sds[:] = "ABCDE"
        np.testing.assert_equal(sds[:], np.array(list("ABCDE"), "S1"))
        # Release HDF handles before the temporary directory is removed
        sds.endaccess()
        sd.end()


if __name__ == "__main__":
    test_pyhdf_sd_read_char()
```
Cool, can you add that as a unit test?
Yes, done - added as a unit test.
This proposed fix addresses an issue encountered after a recent upgrade from pyhdf 0.10.5 to 0.11.3. When reading an SDS of type `SDC.CHAR` with version 0.11.3, I'd get the following error:

I tracked the problem down to the use of `PyArray_SimpleNew` with typenum `NPY_STRING`, which fails because that type code requires a size specification. This change instead uses `PyArray_New` with an `itemsize` of 1, which reproduces the 0.10.5 `SDC.CHAR` read behavior. All the non-`NPY_STRING` type codes are fixed-size, in which case `PyArray_New` ignores the `itemsize` argument, so behavior for those type codes should be unaffected.
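Not part of the patch itself, but a small NumPy-only sketch of what passing `itemsize=1` means at the Python level: viewing a raw CHAR buffer with an explicit one-byte string dtype (`"S1"`) yields one element per character, which is the dtype the unit test above compares against. The flexible `"S"` dtype, by contrast, has no fixed itemsize, which is the Python-level analogue of why `NPY_STRING` needs a size specification.

```python
import numpy as np

# Illustration only (not pyhdf code): a raw CHAR buffer viewed with an
# explicit one-byte string dtype ("S1") gives one element per character,
# matching the array the patched PyArray_New(itemsize=1) call produces.
raw = b"ABCDE"
arr = np.frombuffer(raw, dtype="S1")
print(arr.tolist())  # [b'A', b'B', b'C', b'D', b'E']

# The sized dtype has a fixed itemsize of 1 byte; the bare "S" dtype has
# itemsize 0, i.e. no size specification.
print(np.dtype("S1").itemsize)  # 1
print(np.dtype("S").itemsize)   # 0
```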