scverse / spatialdata

An open and interoperable data framework for spatial omics data
https://spatialdata.scverse.org/
BSD 3-Clause "New" or "Revised" License

reimplemented incremental io #501

Open LucaMarconato opened 1 month ago

LucaMarconato commented 1 month ago

Closes #186 Closes #496 Closes #498

Support for incremental io operations.

New features:

Robustness:

Testing:

Other:

This PR also sets the basis for (not implemented here) the ability to load in-memory objects that are Dask-backed.

codecov[bot] commented 1 month ago

Codecov Report

Attention: Patch coverage is 85.90078%, with 54 lines in your changes missing coverage. Please review.

Project coverage is 92.15%. Comparing base (b73f3a8) to head (582622f).

:exclamation: Current head 582622f differs from pull request most recent head f2bea77. Consider uploading reports for the commit f2bea77 to get more accurate results

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #501      +/-   ##
==========================================
- Coverage   92.53%   92.15%   -0.38%
==========================================
  Files          43       43
  Lines        6003     6211     +208
==========================================
+ Hits         5555     5724     +169
- Misses        448      487      +39
==========================================
```

| Files | Coverage Δ | |
|---|---|---|
| `src/spatialdata/_core/_deepcopy.py` | `98.41% <100.00%> (+0.02%)` | ⬆️ |
| `src/spatialdata/_core/_elements.py` | `92.47% <100.00%> (+0.80%)` | ⬆️ |
| `src/spatialdata/_io/format.py` | `87.38% <100.00%> (ø)` | |
| `src/spatialdata/_io/io_zarr.py` | `88.37% <100.00%> (ø)` | |
| `src/spatialdata/dataloader/datasets.py` | `90.68% <ø> (ø)` | |
| `src/spatialdata/models/__init__.py` | `100.00% <ø> (ø)` | |
| `src/spatialdata/models/models.py` | `88.30% <100.00%> (+0.31%)` | ⬆️ |
| `src/spatialdata/testing.py` | `98.24% <87.50%> (-1.76%)` | ⬇️ |
| `src/spatialdata/models/_utils.py` | `91.30% <87.50%> (-0.30%)` | ⬇️ |
| `src/spatialdata/_io/_utils.py` | `88.52% <75.00%> (-2.33%)` | ⬇️ |

... and 2 more files, and 1 file with indirect coverage changes.
LucaMarconato commented 1 month ago

@ArneDefauw @aeisenbarth tagging you because at some point you opened an issue regarding incremental IO. This PR implements incremental IO; happy to receive feedback in case you want to play around with it 😊

I will make a notebook to showcase it, but in short, to save an element (labels, table, etc.) you can use the new `sdata.write_element('element_name')`. If the element already exists in the storage, an exception will be raised. You can work around the exception, for instance, with the strategies shown here: https://github.com/scverse/spatialdata/blob/87dd1a88b41139d2657b56a5191a14e239aa26a2/tests/io/test_readwrite.py#L159

Please note that those strategies are not guaranteed to work in all scenarios, including multi-threaded applications, network storage, etc. So please use them with care.
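As a toy illustration of the behavior described above (a deliberately simplified stand-in, not the spatialdata API): writing an element that already exists raises, and one workaround is to delete the stored element explicitly before rewriting it.

```python
# Toy stand-in for the "raise on existing element" behavior and the
# delete-then-rewrite workaround; not the actual spatialdata API.

class ToyStore:
    def __init__(self):
        self._elements = {}

    def write_element(self, name, data, overwrite=False):
        if name in self._elements and not overwrite:
            raise ValueError(f"Element '{name}' already exists on disk.")
        self._elements[name] = data

    def delete_element_from_disk(self, name):
        del self._elements[name]


store = ToyStore()
store.write_element("labels", [1, 2, 3])
try:
    store.write_element("labels", [4, 5, 6])  # raises: element already exists
except ValueError:
    # workaround: explicitly delete the stored element, then write again
    store.delete_element_from_disk("labels")
    store.write_element("labels", [4, 5, 6])

print(store._elements["labels"])  # [4, 5, 6]
```

The real API additionally has to guard against races (another process writing between the delete and the rewrite), which is why the strategies linked above are not guaranteed to be safe in multi-threaded or networked settings.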

LucaMarconato commented 1 month ago

Currently the whole table needs to be replaced and the whole table needs to be stored in memory, but recent progress in anndata + dask will also be used in spatialdata to allow lazy loading and the replacement of particular components (like adding a single obs column). This PR cleans up the previous code and is a step in that direction.

melonora commented 1 month ago

Thanks @LucaMarconato! I left a review with some minor points below. I think it looks good, but I didn't have time for a super in-depth review. Given that this is a big change, I think the approval should be given by somebody who can look more closely.

Thanks for the review @kevinyamauchi. I will have a look at this PR later today as well.

ArneDefauw commented 1 month ago

> @ArneDefauw @aeisenbarth tagging you because at some point you opened an issue regarding incremental IO. This PR implements incremental IO; happy to receive feedback in case you want to play around with it 😊
>
> I will make a notebook to showcase it, but in short, to save an element (labels, table, etc.) you can use the new `sdata.write_element('element_name')`. If the element already exists in the storage, an exception will be raised. You can work around the exception, for instance, with the strategies shown here:
>
> https://github.com/scverse/spatialdata/blob/87dd1a88b41139d2657b56a5191a14e239aa26a2/tests/io/test_readwrite.py#L159
>
> Please note that those strategies are not guaranteed to work in all scenarios, including multi-threaded applications, network storage, etc. So please use them with care.

Thanks for the quick response and fix!

I've tested the incremental IO for my use case, and so far everything seems to work as expected, except for one thing. If I follow the approach suggested here:

https://github.com/scverse/spatialdata/blob/87dd1a88b41139d2657b56a5191a14e239aa26a2/tests/io/test_readwrite.py#L159

I get a ValueError when I load my SpatialData object back from the zarr store and try to overwrite it: `ValueError: The file path specified is a parent directory of one or more files used for backing for one or more elements in the SpatialData object. Deleting the data would corrupt the SpatialData object.`

The fix was to first delete the attribute from the SpatialData object, and then remove the element on disk. A minimal example of a typical workflow in my image processing pipelines:

```python
import os

import dask.array as da
import spatialdata
from spatialdata import SpatialData, read_zarr

img_layer = "test_image"

# `path` is an existing directory, defined elsewhere in the pipeline
sdata = SpatialData()
sdata.write(os.path.join(path, "sdata.zarr"))

dummy_array = da.random.random(size=(1, 10000, 10000), chunks=(1, 1000, 1000))
se = spatialdata.models.Image2DModel.parse(data=dummy_array)

sdata.images[img_layer] = se
if sdata.is_backed():
    sdata.write_element(img_layer, overwrite=True)

# need to read back from the zarr store, otherwise the graph in the
# in-memory sdata would not be executed
sdata = read_zarr(sdata.path)

# now overwrite; here I needed to first delete the attribute
element_type = sdata._element_type_from_element_name(img_layer)
del getattr(sdata, element_type)[img_layer]
# then delete the element on disk
if sdata.is_backed():
    sdata.delete_element_from_disk(img_layer)

sdata.images[img_layer] = se
if sdata.is_backed():
    sdata.write_element(img_layer, overwrite=True)

sdata = read_zarr(sdata.path)
```

I think what the unit test you referred to lacks is reading back from the zarr store after an element is written to it.

In version 0.0.15 of SpatialData, when sdata.add_image(...) was executed, it was not necessary to read back from the zarr store. I understand that the current implementation allows for more control, but the in-place update of the SpatialData object was kinda convenient.

Edit: I added a pull request, to illustrate the issue a little bit more: https://github.com/scverse/spatialdata/pull/515

LucaMarconato commented 1 month ago

Thank you @ArneDefauw for trying the code and for the explanation, I will now look into your PR.

> In version 0.0.15 of SpatialData, when sdata.add_image(...) was executed, it was not necessary to read back from the zarr store. I understand that the current implementation allows for more control, but the in-place update of the SpatialData object was kinda convenient.

The reason why we refactored this part is that with add_image(), if the user had an in-memory image and wrote it to disk, the image would then immediately be lazy-loaded. This is good and ergonomic if the image needs to be written only once, but if the user tries to write the image again (for instance in a notebook, where a cell may get manually executed twice), it would have led to an error.
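A toy sketch of the failure mode described here (a simplified stand-in, not spatialdata's actual internals): once the element is backed by the on-disk file, writing the same element to the same path again would delete the very file that backs it.

```python
# Simplified stand-in for the "write once, then backed" failure mode;
# not spatialdata's actual internals.

class Element:
    def __init__(self, data):
        self.data = data
        self.backing_path = None  # None means purely in-memory


def write(element, path):
    if element.backing_path == path:
        raise ValueError(
            "Element is backed by the target path; overwriting would "
            "delete the very file that backs it."
        )
    # ... actual on-disk write would happen here ...
    element.backing_path = path  # element now behaves as lazy-loaded from disk


el = Element([1, 2, 3])
write(el, "sdata.zarr/images/img")  # first write: fine, element becomes backed
try:
    write(el, "sdata.zarr/images/img")  # second write: raises
except ValueError as err:
    print(err)
```

This is why the new API asks the user to explicitly delete the in-memory attribute and the on-disk element before rewriting, rather than silently swapping the in-memory object for a lazy-loaded one as add_image() did.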

LucaMarconato commented 1 month ago

Thanks for the reviews. I addressed the points from @kevinyamauchi and from @ArneDefauw (in particular I merged his PR here). @giovp, when you have time could you also please give a pass to this?

ArneDefauw commented 1 month ago

> Thank you @ArneDefauw for trying the code and for the explanation, I will now look into your PR.
>
> > In version 0.0.15 of SpatialData, when sdata.add_image(...) was executed, it was not necessary to read back from the zarr store. I understand that the current implementation allows for more control, but the in-place update of the SpatialData object was kinda convenient.
>
> The reason why we refactored this part is that with add_image(), if the user had an in-memory image and wrote it to disk, the image would then immediately be lazy-loaded. This is good and ergonomic if the image needs to be written only once, but if the user tries to write the image again (for instance in a notebook, where a cell may get manually executed twice), it would have led to an error.

Hi @LucaMarconato, thanks for the reply and the fixes! I've tested your suggestions (https://github.com/scverse/spatialdata/blob/582622f689a7e05421e9d066f98baf702549978f/tests/io/test_readwrite.py#L136) for my use cases and everything seems to work fine (for images, labels, points, and shapes).

Workaround 1 looks rather safe in most scenarios. If I understand correctly, it covers the following scenario: having "x" in sdata, doing something with "x" (i.e. defining a dask graph), and then writing to "x".

How I would usually work is: having "x" in sdata, doing something with "x", and writing to "y" (where "y" already exists).

The latter feels less dangerous and looks pretty standard in image processing pipelines, e.g. when tuning hyperparameters for image cleaning or segmentation.

In the latter case, I guess the following would be sufficient:


```python
import spatialdata
from spatialdata import read_zarr

# derive a new element from "x" and write it to "y", replacing the old "y"
arr = sdata["x"].data
arr = arr * 2
spatial_element = spatialdata.models.Image2DModel.parse(arr)

# delete "y" both in memory and on disk before writing the new version
del sdata["y"]
sdata.delete_element_from_disk("y")
sdata["y"] = spatial_element
sdata.write_element("y")
sdata = read_zarr(sdata.path)
```
LucaMarconato commented 1 month ago

Yes, I agree that the approach you described is generally good practice when processing data, and safe, since the original data is not modified.

The use cases that I described are instead for when the data itself is replaced. I think I should add in the comments that this approach should be avoided when possible, and clarify that the workarounds I described should be used only if really needed.

namsaraeva commented 1 month ago

Thank you for this PR, I am using it right now. One question: would it be possible to pass a list of strings to write_element() instead of just one element? @LucaMarconato

LucaMarconato commented 1 month ago

@namsaraeva thanks for the suggestion, it's indeed handier to have a list of names. I have added support for this to write_element() and delete_element_from_disk().
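A common way to support both a single name and a list of names is to normalize the argument up front; a minimal sketch of this pattern with hypothetical helpers (the actual spatialdata implementation may differ):

```python
# Hypothetical helpers illustrating "string or list of strings" handling;
# not spatialdata's actual code.

def normalize_names(element_name):
    """Return a list of names, whether given a single string or an iterable."""
    if isinstance(element_name, str):
        return [element_name]
    return list(element_name)


def write_elements(element_name):
    # each name would be written individually, as write_element() did before
    written = []
    for name in normalize_names(element_name):
        written.append(name)  # placeholder for the actual per-element write
    return written


print(write_elements("table"))              # ['table']
print(write_elements(["labels", "table"]))  # ['labels', 'table']
```

The string check must come before treating the argument as an iterable, since a string is itself iterable and would otherwise be split into characters.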