deshaw / versioned-hdf5

Versioned HDF5 provides a versioned abstraction on top of h5py
https://deshaw.github.io/versioned-hdf5/

Implement as_subchunk_map in Cython #364

Closed ArvidJB closed 1 month ago

ArvidJB commented 1 month ago

By reimplementing as_subchunk_map in Cython, we can significantly improve write performance.
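For context, here is a minimal pure-Python sketch (not the library's actual implementation) of the kind of per-chunk work as_subchunk_map does for a contiguous 1-D slice: intersect the selection with each chunk and yield the chunk index together with chunk-local coordinates. Each such Python-level iteration is exactly what a Cython rewrite can speed up.

```python
def subchunk_map_1d(chunk_size, sel, length):
    """Sketch only: map a contiguous 1-D slice onto fixed-size chunks,
    yielding (chunk_index, chunk-local slice) pairs."""
    start, stop, step = sel.indices(length)
    assert step == 1  # this sketch handles contiguous slices only
    first_chunk = start // chunk_size
    last_chunk = (stop - 1) // chunk_size
    for c in range(first_chunk, last_chunk + 1):
        chunk_start = c * chunk_size
        chunk_stop = min(chunk_start + chunk_size, length)
        # intersection of the selection with this chunk, in chunk-local coords
        lo = max(start, chunk_start) - chunk_start
        hi = min(stop, chunk_stop) - chunk_start
        yield c, slice(lo, hi)

# A selection spanning parts of two 1000-element chunks:
print(list(subchunk_map_1d(1000, slice(500, 1500), 1_000_000)))
# → [(0, slice(500, 1000, None)), (1, slice(0, 500, None))]
```
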

We spend significant time in as_subchunk_map both when resizing and when reading the existing data from the file:

In [2]: with h5py.File(path, 'w') as f:
   ...:     vf = VersionedHDF5File(f)
   ...:     with vf.stage_version('r0') as sv:
   ...:         sv.create_dataset('value', data=np.arange(1_000_000), chunks=(1000,), maxshape=(None,))
   ...:
[PYFLYBY] from versioned_hdf5 import VersionedHDF5File
[PYFLYBY] import h5py
[PYFLYBY] import numpy as np

In [3]: %load_ext pyinstrument

In [4]: %%pyinstrument
   ...: for i in range(1, 11):
   ...:     with h5py.File(path, 'r+') as f:
   ...:         vf = VersionedHDF5File(f)
   ...:         with vf.stage_version(f'r{i}') as sv:
   ...:             sv['value'].resize((1_000_000 + i,))
   ...:             a = np.empty(1_000_000 + i, dtype='int64')
   ...:             a[:1_000_000 + i - 1] = sv['value'][:1_000_000 + i - 1]
   ...:             sv['value'] = a
   ...:

  _     ._   __/__   _ _  _  _ _/_   Recorded: 14:11:51  Samples:  2785
 /_//_/// /_\ / //_// / //_'/ //     Duration: 3.228     CPU time: 3.203
/   _/                      v4.6.1

Program: /usr/local/python/python-3.11/std/bin/py

3.228 <module>  <ipython-input-4-eb8e7bd4261f>:1
|- 1.239 _GeneratorContextManager.__exit__  contextlib.py:141
|  `- 1.239 VersionedHDF5File.stage_version  versioned_hdf5/api.py:267
|     `- 1.239 commit_version  versioned_hdf5/versions.py:71
|        |- 0.678 create_virtual_dataset  versioned_hdf5/backend.py:436
|        |  |- 0.215 [self]  versioned_hdf5/backend.py
|        |  |- 0.172 select  h5py/_hl/selections.py:19
|        |  |  |- 0.109 [self]  h5py/_hl/selections.py
|        |  |  `- 0.047 SimpleSelection.__init__  h5py/_hl/selections.py:227
|        |  |     `- 0.037 SimpleSelection.__init__  h5py/_hl/selections.py:112
|        |  |- 0.090 Group.create_virtual_dataset  h5py/_hl/group.py:188
|        |  |  `- 0.067 VirtualLayout.make_dataset  h5py/_hl/vds.py:228
|        |  |- 0.078 Dataset.shape  h5py/_hl/dataset.py:467
|        |  |  `- 0.075 [self]  h5py/_hl/dataset.py
|        |  `- 0.038 <dictcomp>  versioned_hdf5/backend.py:444
|        `- 0.518 write_dataset  versioned_hdf5/backend.py:119
|           |- 0.281 Hashtable.hash  versioned_hdf5/hashtable.py:118
|           |  `- 0.239 openssl_sha256  <built-in>
|           |- 0.105 [self]  versioned_hdf5/backend.py
|           `- 0.050 Hashtable.__contains__  <frozen _collections_abc>:778
|              `- 0.042 Hashtable.__getitem__  versioned_hdf5/hashtable.py:206
|- 0.937 DatasetWrapper.__getitem__  versioned_hdf5/wrappers.py:1402
|  `- 0.937 InMemoryDataset.__getitem__  versioned_hdf5/wrappers.py:898
|     `- 0.937 InMemoryDataset.get_index  versioned_hdf5/wrappers.py:798
|        |- 0.386 as_subchunk_map  versioned_hdf5/wrappers.py:500
|        |  |- 0.273 <listcomp>  versioned_hdf5/wrappers.py:609
|        |  |  |- 0.219 [self]  versioned_hdf5/wrappers.py
|        |  |  `- 0.049 where  <__array_function__ internals>:177
|        |  |        [2 frames hidden]  <__array_function__ internals>, <buil...
|        |  `- 0.080 [self]  versioned_hdf5/wrappers.py
|        |- 0.329 InMemoryDatasetID._read_chunk  versioned_hdf5/wrappers.py:1490
|        |  `- 0.318 Dataset.__getitem__  h5py/_hl/dataset.py:749
|        |     |- 0.161 Reader.read  <built-in>
|        |     |- 0.115 Dataset._fast_reader  h5py/_hl/dataset.py:527
|        |     `- 0.038 [self]  h5py/_hl/dataset.py
|        `- 0.196 [self]  versioned_hdf5/wrappers.py
|- 0.771 InMemoryDataset.resize  versioned_hdf5/wrappers.py:749
|  |- 0.392 as_subchunk_map  versioned_hdf5/wrappers.py:500
|  |  |- 0.315 <listcomp>  versioned_hdf5/wrappers.py:609
|  |  |  |- 0.257 [self]  versioned_hdf5/wrappers.py
|  |  |  `- 0.055 where  <__array_function__ internals>:177
|  |  |        [2 frames hidden]  <__array_function__ internals>, <buil...
|  |  `- 0.052 [self]  versioned_hdf5/wrappers.py
|  |- 0.170 InMemoryDatasetID.data_dict  versioned_hdf5/wrappers.py:1433
|  |  `- 0.160 [self]  versioned_hdf5/wrappers.py
|  |- 0.115 InMemoryDataset.get_index  versioned_hdf5/wrappers.py:798
|  |  `- 0.114 InMemoryDataset.__getitem__  h5py/_hl/dataset.py:749
|  |     `- 0.114 Reader.read  <built-in>
|  `- 0.091 [self]  versioned_hdf5/wrappers.py
|- 0.178 _GeneratorContextManager.__enter__  contextlib.py:132
|  `- 0.178 VersionedHDF5File.stage_version  versioned_hdf5/api.py:267
|     `- 0.177 create_version_group  versioned_hdf5/versions.py:22
|        `- 0.174 Group.visititems  h5py/_hl/group.py:635
|           |- 0.132 proxy  h5py/_hl/group.py:660
|           |  |- 0.075 _get  versioned_hdf5/versions.py:57
|           |  |  `- 0.055 InMemoryGroup.__setitem__  versioned_hdf5/wrappers.py:115
|           |  |     `- 0.055 InMemoryGroup._add_to_data  versioned_hdf5/wrappers.py:119
|           |  |        `- 0.055 InMemoryDataset.__init__  versioned_hdf5/wrappers.py:643
|           |  `- 0.047 Group.__getitem__  h5py/_hl/group.py:348
|           `- 0.042 [self]  h5py/_hl/group.py
`- 0.051 InMemoryGroup.__setitem__  versioned_hdf5/wrappers.py:115
   `- 0.051 InMemoryGroup._add_to_data  versioned_hdf5/wrappers.py:119

By moving the as_subchunk_map implementation to Cython, these calls become significantly faster (although the rest of the code of course takes the same amount of time as before):

In [4]: %%pyinstrument
   ...: for i in range(1, 11):
   ...:     with h5py.File(path, 'r+') as f:
   ...:         vf = VersionedHDF5File(f)
   ...:         with vf.stage_version(f'r{i}') as sv:
   ...:             sv['value'].resize((1_000_000 + i,))
   ...:             a = np.empty(1_000_000 + i, dtype='int64')
   ...:             a[:1_000_000 + i - 1] = sv['value'][:1_000_000 + i - 1]
   ...:             sv['value'] = a
   ...:

  _     ._   __/__   _ _  _  _ _/_   Recorded: 14:15:18  Samples:  2431
 /_//_/// /_\ / //_// / //_'/ //     Duration: 2.909     CPU time: 2.892
/   _/                      v4.6.1

Program: /usr/local/python/python-3.11/std/bin/py

2.909 <module>  <ipython-input-4-eb8e7bd4261f>:1
|- 1.260 _GeneratorContextManager.__exit__  contextlib.py:141
|  `- 1.260 VersionedHDF5File.stage_version  versioned_hdf5/api.py:267
|     `- 1.260 commit_version  versioned_hdf5/versions.py:71
|        |- 0.678 create_virtual_dataset  versioned_hdf5/backend.py:436
|        |  |- 0.224 [self]  versioned_hdf5/backend.py
|        |  |- 0.164 select  h5py/_hl/selections.py:19
|        |  |  |- 0.099 [self]  h5py/_hl/selections.py
|        |  |  `- 0.042 SimpleSelection.__init__  h5py/_hl/selections.py:227
|        |  |     `- 0.037 SimpleSelection.__init__  h5py/_hl/selections.py:112
|        |  |- 0.091 Group.create_virtual_dataset  h5py/_hl/group.py:188
|        |  |  `- 0.070 VirtualLayout.make_dataset  h5py/_hl/vds.py:228
|        |  |     `- 0.069 [self]  h5py/_hl/vds.py
|        |  |- 0.064 Dataset.shape  h5py/_hl/dataset.py:467
|        |  |  `- 0.062 [self]  h5py/_hl/dataset.py
|        |  |- 0.035 <dictcomp>  versioned_hdf5/backend.py:444
|        |  `- 0.032 Dataset.name  h5py/_hl/base.py:287
|        `- 0.538 write_dataset  versioned_hdf5/backend.py:119
|           |- 0.284 Hashtable.hash  versioned_hdf5/hashtable.py:118
|           |  `- 0.233 openssl_sha256  <built-in>
|           `- 0.145 [self]  versioned_hdf5/backend.py
|- 0.773 DatasetWrapper.__getitem__  versioned_hdf5/wrappers.py:1266
|  `- 0.773 InMemoryDataset.__getitem__  versioned_hdf5/wrappers.py:762
|     `- 0.772 InMemoryDataset.get_index  versioned_hdf5/wrappers.py:662
|        |- 0.381 [self]  versioned_hdf5/wrappers.py
|        `- 0.328 InMemoryDatasetID._read_chunk  versioned_hdf5/wrappers.py:1354
|           `- 0.321 Dataset.__getitem__  h5py/_hl/dataset.py:749
|              |- 0.153 Reader.read  <built-in>
|              |- 0.124 Dataset._fast_reader  h5py/_hl/dataset.py:527
|              |  `- 0.123 [self]  h5py/_hl/dataset.py
|              `- 0.038 [self]  h5py/_hl/dataset.py
|- 0.550 InMemoryDataset.resize  versioned_hdf5/wrappers.py:613
|  |- 0.199 [self]  versioned_hdf5/wrappers.py
|  |- 0.166 InMemoryDatasetID.data_dict  versioned_hdf5/wrappers.py:1297
|  |  `- 0.161 [self]  versioned_hdf5/wrappers.py
|  |- 0.115 InMemoryDataset.get_index  versioned_hdf5/wrappers.py:662
|  |  `- 0.112 InMemoryDataset.__getitem__  h5py/_hl/dataset.py:749
|  |     `- 0.112 Reader.read  <built-in>
|  `- 0.057 Sequence.__instancecheck__  <frozen abc>:117
|        [2 frames hidden]  <frozen abc>
|- 0.221 _GeneratorContextManager.__enter__  contextlib.py:132
|  `- 0.221 VersionedHDF5File.stage_version  versioned_hdf5/api.py:267
|     `- 0.220 create_version_group  versioned_hdf5/versions.py:22
|        `- 0.218 Group.visititems  h5py/_hl/group.py:635
|           |- 0.173 proxy  h5py/_hl/group.py:660
|           |  |- 0.116 _get  versioned_hdf5/versions.py:57
|           |  |  `- 0.096 InMemoryGroup.__setitem__  versioned_hdf5/wrappers.py:114
|           |  |     `- 0.096 InMemoryGroup._add_to_data  versioned_hdf5/wrappers.py:118
|           |  |        `- 0.096 InMemoryDataset.__init__  versioned_hdf5/wrappers.py:507
|           |  |           `- 0.056 InMemoryDatasetID.__init__  versioned_hdf5/wrappers.py:1271
|           |  `- 0.048 Group.__getitem__  h5py/_hl/group.py:348
|           |     `- 0.032 [self]  h5py/_hl/group.py
|           `- 0.044 [self]  h5py/_hl/group.py
`- 0.049 InMemoryGroup.__setitem__  versioned_hdf5/wrappers.py:114
   `- 0.049 InMemoryGroup._add_to_data  versioned_hdf5/wrappers.py:118

Note that even in the Cython version of as_subchunk_map we still spend significant time constructing ndindex objects, which accounts for the majority of the time in that function.
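To illustrate why object construction dominates even after the loop itself is compiled: index-object classes typically validate their arguments in a Python-level `__init__`, which is far more expensive than creating a builtin `slice`. The micro-benchmark below uses a hypothetical stand-in class, `ValidatedSlice`, rather than ndindex's actual classes.

```python
import timeit

class ValidatedSlice:
    """Stand-in (not an ndindex class) for an index object that
    validates its arguments on construction."""
    __slots__ = ("start", "stop", "step")

    def __init__(self, start, stop, step=1):
        for v in (start, stop, step):
            if not isinstance(v, int):
                raise TypeError("bounds must be integers")
        if step == 0:
            raise ValueError("step cannot be zero")
        self.start, self.stop, self.step = start, stop, step

raw = timeit.timeit("slice(0, 1000, 1)", number=100_000)
wrapped = timeit.timeit(
    "ValidatedSlice(0, 1000, 1)", globals=globals(), number=100_000
)
print(f"builtin slice:  {raw:.4f}s")
print(f"wrapper object: {wrapped:.4f}s")
```

On typical CPython builds the wrapper is several times slower per construction, which is why per-chunk object creation shows up so prominently in the profiles above.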

crusaderky commented 1 month ago

I ran some benchmarks:


from versioned_hdf5.wrappers import as_subchunk_map
from ndindex import ChunkSize

%%timeit
# effectively 1D chunking (one chunk along axis 1):
# runtime is dominated by creation of chunk_subindexes
list(as_subchunk_map(
    ChunkSize((10, 100_000)),  # 8 MB
    (slice(None), slice(None)),
    (100_000, 100_000),  # 80 GB
))
# master : 209 ms
# this PR: 41 ms

%%timeit
# 2D: runtime is dominated by final product() for loop
list(as_subchunk_map(
    ChunkSize((1000, 1000)),  # 8 MB
    (slice(None), slice(None)),
    (100_000, 100_000),  # 80 GB
))
# master : 34 ms
# this PR: 31 ms
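Worth noting: both benchmark cases iterate over the same number of chunks, so they differ only in where the per-chunk time goes. A quick count (n_chunks is a small helper written for this illustration, not a library function):

```python
import math

def n_chunks(chunks, shape):
    # number of chunks needed to tile `shape` with chunks of size `chunks`
    return math.prod(math.ceil(s / c) for c, s in zip(chunks, shape))

print(n_chunks((10, 100_000), (100_000, 100_000)))  # first benchmark
# → 10000
print(n_chunks((1000, 1000), (100_000, 100_000)))   # second benchmark
# → 10000
```
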
crusaderky commented 1 month ago

@peytondmurray this is now ready for review and merge

peytondmurray commented 1 month ago

Taking a look now!