tomcrane opened this issue 7 years ago
(raised in Toronto WG, and on Community call 2017-10-25)
Yes, those represent the basic case.
The same applies when we view multispectral (multi-channel) digital microscopy images: each channel image has regions of interest that could be annotated. Here is the same image taken at 3 different wavelengths:
If I want to annotate some interesting features in the red wavelength, it only makes sense to display them on the red image, no matter where that image is laid out on the canvas.
and then again when we create an overlay composite of the three channels:
See an interactive example here: https://imoutsatsos.github.io/osdUseCaseHCS/
More generally: I have an annotation on the canvas that only makes sense in the context of another annotation on the canvas.
Specific examples:
This might just be an implementation pattern rather than anything new in the Presentation 3 spec. The W3C Web Annotation model has scopes:
https://www.w3.org/TR/annotation-model/#scope-of-a-resource
and states:
https://www.w3.org/TR/annotation-model/#states
We could have a shared recipe for conveying the scope of one annotation on the canvas in the context of another. Client software could then use this information to help the user view the resource as intended.
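As a rough sketch of what such a recipe might look like: the W3C model already allows a `scope` property on a `SpecificResource` target, so a commenting annotation could point at a canvas region while declaring that it is only relevant in the context of the painting annotation that placed the red-channel image there. All identifiers below are hypothetical, and this is just one possible shape, not a proposed spec:

```json
{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "id": "https://example.org/anno/red-feature-1",
  "type": "Annotation",
  "motivation": "commenting",
  "body": {
    "type": "TextualBody",
    "value": "Feature visible only in the red channel"
  },
  "target": {
    "type": "SpecificResource",
    "source": "https://example.org/canvas/1",
    "selector": {
      "type": "FragmentSelector",
      "value": "xywh=1200,800,400,300"
    },
    "scope": "https://example.org/anno/paint-red-channel"
  }
}
```

Here `scope` references the (hypothetical) painting annotation for the red-channel image; a client that understands the recipe would only show this annotation when that painting annotation is currently displayed.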