ome / ngff

Next-generation file format (NGFF) specifications for storing bioimaging data in the cloud.
https://ngff.openmicroscopy.org

remove axis restrictions #235

Open d-v-b opened 2 months ago

d-v-b commented 2 months ago

Axes can be N-dimensional, of any type, in any order.

github-actions[bot] commented 2 months ago

Automated Review URLs

imagesc-bot commented 2 months ago

This pull request has been mentioned on Image.sc Forum. There might be relevant details there:

https://forum.image.sc/t/ome-ngff-update-postponing-transforms-previously-v0-5/95617/3

jni commented 2 months ago

Thanks for opening the PR @d-v-b! 🙏

But it seems you missed a line further down, line 313 in the current file:

Each "datasets" dictionary MUST have the same number of dimensions and MUST NOT have more than 5 dimensions.

d-v-b commented 2 months ago

Thanks @jni, I think I got all the references to the 2-5D limit. Please let me know if I missed any.

One consequence of this change is that 1D data can now be stored in OME-NGFF. Personally, I think this is great -- 1D data is real data, and people should be able to store it if they have it.
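As an illustration (the values are hypothetical and fields such as `version` are omitted for brevity), multiscales metadata for a 1D dataset would look something like this once the restriction is removed:

```python
# Hypothetical multiscales metadata for a 1D (time-only) dataset.
# Field names follow the OME-NGFF multiscales layout; values are illustrative.
multiscales = [
    {
        "axes": [{"name": "t", "type": "time", "unit": "second"}],
        "datasets": [
            {
                "path": "0",
                "coordinateTransformations": [
                    {"type": "scale", "scale": [0.1]},
                ],
            }
        ],
    }
]

# The length of "scale" still has to match the number of axes -- here, 1.
assert len(multiscales[0]["axes"]) == 1
```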

bogovicj commented 2 months ago

This PR should update the schema and examples before being merged. Heads up: I did a lot of work on this front in https://github.com/ome/ngff/pull/138 (my commits from Dec 2023), so take and edit whatever you can use. I can try to help merge things.

Edit: after a little more thought, I'm hopeful the schema changes needed here will be small; but certainly some examples that are currently disallowed that we want to allow would be helpful.

d-v-b commented 2 months ago

Switching this to a draft while I work on getting the schema documents consistent with the spec. Because manually editing JSON schema documents is tedious and error-prone, I am going to generate the schema with some Python scripts containing pydantic models. Personally, I think these models should be part of this repo, because pydantic is a much better tool for modelling JSON schema than editing it by hand, but if this is unconvincing I can remove the Python files from the final PR.
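For concreteness, here is a minimal sketch of the approach (model and field names are illustrative, not the actual models in this PR), using pydantic v2 to emit a JSON schema document instead of writing it by hand:

```python
import json
from typing import Literal, Optional

from pydantic import BaseModel


class Axis(BaseModel):
    """An axis of an array; only `name` is required."""
    name: str
    type: Optional[str] = None
    unit: Optional[str] = None


class ScaleTransform(BaseModel):
    """A scale transform; `type` is pinned to the literal string "scale"."""
    type: Literal["scale"]
    scale: list[float]


# Serialize the model to a JSON schema document instead of writing it by hand.
print(json.dumps(ScaleTransform.model_json_schema(), indent=2))
```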

glyg commented 1 month ago

I am going to generate the schema with some python scripts containing pydantic models.

@d-v-b wouldn't a linkML version of your pydantic classes be of more generic use?

---
id: https://ngff.openmicroscopy.org/latest/schemas/image.schema
name: ngff-image
title: OpenMicroscopy New Generation File Formats Image Schema
description: |-
  TODO
version: 0.1
license: ??

prefixes:
  linkml: https://w3id.org/linkml/
  biolink: https://w3id.org/biolink/
  schema: http://schema.org/
  ome: https://www.openmicroscopy.org/Schemas/Documentation/Generated/OME-2016-06/ome.html#
  ORCID: https://orcid.org/
  wiki: https://en.wikipedia.org/wiki/

classes:
  Axis:
    attributes:
      name:
        required: true
      type:
      unit:

  ScaleTransform:
    attributes:
      type:
        # equals_string: "scale" (set this as a rule?)
      scale:
        range: float
        array:
          maximum_number_dimensions: 1
          dimensions:
            - minimum_cardinality: 1

  TranslationTransform:
    attributes:
      type:
        # equals_string: "translation" (set this as a rule?)
      translation:
        range: float
        array:
          maximum_number_dimensions: 1
          dimensions:
            - minimum_cardinality: 1

...
d-v-b commented 1 month ago

@glyg perhaps it would, but the goal here is just to generate JSON schema documents, so I'm not sure what generic use we need to accommodate?

glyg commented 1 month ago

@d-v-b: I surely don't have a broad enough view of the whole project, so I might very well be mistaken.

what generic use we need to accommodate?

I was thinking about consumers of the schema or of a zarr.json. linkML seems to me more usable and language agnostic than custom pydantic classes.

For example a third party library wanting to parse the zarr.json could import these schemas to embed them in its own tooling.

d-v-b commented 1 month ago

@glyg in terms of scope, this repo currently contains JSON schema documents that can be fetched from GitHub and used for validation. I don't think there's any expectation that software libraries import code artifacts from this repo. That could of course change, but I don't know of efforts in that direction.

I am proposing changes to the spec, and so I need to update the schema documents. Because the current JSON schema documents are manually written, they contain mistakes and are a pain to update after making spec changes.

Since this project already depends on Python, as a quality-of-life change I am proposing to use pydantic to define data models that serialize to JSON schema, so that the schema documents don't need to be written by hand. I could be wrong, but I suspect writing data models in Python and serializing them to JSON schema will be an easier development experience than writing data models in JSON schema directly. Maybe linkml could also work for this purpose, but I don't know how to use linkml and I do know how to use pydantic, so for me the choice is simple.
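To illustrate the validation use case: any generic JSON Schema validator can consume the published documents, so third parties don't need to import anything from this repo. A sketch with the `jsonschema` package, using a hand-written fragment in place of a fetched schema:

```python
from jsonschema import ValidationError, validate

# A fragment resembling an axis schema (hand-written here for illustration;
# in practice the document would be fetched from the published schema URL).
axis_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "type": {"type": "string"},
        "unit": {"type": "string"},
    },
    "required": ["name"],
}

# A valid axis passes silently.
validate({"name": "x", "type": "space", "unit": "micrometer"}, axis_schema)

# An axis missing the required "name" raises a ValidationError.
try:
    validate({"type": "space"}, axis_schema)
except ValidationError as err:
    print(f"invalid axis: {err.message}")
```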

glyg commented 1 month ago

Yes, I understand your point. I think a linkml version would bring some value, but as you said, you are the one doing the work :slightly_smiling_face:

Thank you for taking the time to answer me

d-v-b commented 1 month ago

Tests are passing, so I think this is ready for review.

d-v-b commented 1 month ago

I added a section providing guidance for partial implementations, i.e. software that does not implement the full spec: the spec now suggests that partial implementations which normalize input data to their supported subset of the spec notify users when this is happening.
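A rough sketch of what that suggestion could look like in practice (the function name and the normalization policy are hypothetical, not part of the spec):

```python
import warnings

MAX_SUPPORTED_DIMS = 5  # this reader's own limit, not a limit from the spec


def normalize_shape(shape: tuple[int, ...]) -> tuple[int, ...]:
    """Drop leading singleton axes until the data fits the supported rank,
    warning the user whenever the data is changed."""
    out = shape
    while len(out) > MAX_SUPPORTED_DIMS and out[0] == 1:
        out = out[1:]
    if out != shape:
        warnings.warn(
            f"Input with shape {shape} was normalized to {out}; this reader "
            f"supports at most {MAX_SUPPORTED_DIMS} dimensions."
        )
    if len(out) > MAX_SUPPORTED_DIMS:
        raise ValueError(f"Cannot represent {len(out)}-dimensional data here.")
    return out
```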

jni commented 1 month ago

I added a section to provide some guidance for partial implementations,

imho this recommendation is orthogonal to the main purpose of this PR, and it should come in a separate PR. I like it, but it's an extra thing and it's hard enough to merge PRs that are small and self-contained.

joshmoore commented 1 month ago

Independently of whether one PR or two, I can certainly see the implementor community wanting clarification in/around RFC-3 about the responsibility placed on them with this restriction dropped.

d-v-b commented 1 month ago

imho this recommendation is orthogonal to the main purpose of this PR, and it should come in a separate PR. I like it, but it's an extra thing and it's hard enough to merge PRs that are small and self-contained.

Because this PR widens the space of valid OME-NGFF data, it seems reasonable to give at least a suggestion for how implementations should handle this change. We cannot expect that all implementations support N-dimensional data. I think the best we can do is suggest that implementations keep users aware of how their data is being cast, coerced, or transformed when that kind of thing is happening. Thus it's very non-orthogonal to this PR.

jni commented 1 month ago

It's orthogonal in the sense that partial implementations were a thing before this PR.