The idea for packaging MLRD training data is to have a STAC catalog containing a collection of STAC items, one per label chip. Each label STAC item has an asset pointing to the actual label vector or raster file, and there are two options for linking the source imagery:
1. Link to each source imagery STAC item from a STAC API that covers the spatial and temporal extent of the label item
2. In the properties, define the APIs and collections to query for source imagery items matching the spatial and temporal extent of the label item, and have clients run the query and dynamically fetch the STAC items
I'm of the opinion that the second option would provide more compact catalogs and require less effort for STAC catalog generation