Closed: @geohacker closed this issue 8 months ago.
Noting that there are already some mockups and design ideas and questions in https://github.com/developmentseed/pearl-frontend-internal/issues/491.
Some followup questions here:
- User should be able to select the imagery they want to use.
  - Will the application permit a user to change the imagery source within a single project? I.e., can I run inference on an AOI with Sentinel imagery, and then (if imagery is available) run inference on the same AOI with NAIP imagery?
- For Sentinel, the user should be able to select a timeframe, with a start and end date.
  - What is displayed on the interface? Does Sentinel provide a single day's scene for the entire timeframe specified, or are there mosaics of multiple scenes over multiple capture days/times within that timeframe? Is there a limit/standard range of dates?
Thank you @LanesGood!
> Will the application permit a user to change the imagery source within a single project? AKA, can I run inference on an AOI with Sentinel imagery, and then (if imagery available) run inference on the same AOI but with NAIP imagery?
I think we may not support this. At the moment we set the imagery source when the project is created, with the assumption that the source won't change later on. It might be best to keep that assumption. The other consideration is that changing imagery will most likely require the user to change the model, which may complicate the workflow. cc @ingalls
> What is displayed on the interface? Does sentinel provide a single day's scene for the entire timeframe specified, or are there mosaics of multiple scenes over multiple capture days/times within that timeframe? Is there a limit/standard range of dates?
We are limited (in a good way) by the abilities of the Planetary Computer. The PC Explorer does a good job of showing the available options: https://planetarycomputer.microsoft.com/explore?c=-100.0038%2C21.2617&z=7.50&v=2&d=sentinel-2-l2a&m=cql%3A3168344d75a04a90d6a4c6204e033db5&r=Natural+color&s=false%3A%3A100%3A%3Atrue&sr=desc&ae=0
The most important ones for us are acquisition dates and cloud cover. We might want to limit the range to quarterly and annual selections, because I don't think daily imagery will be useful with the models we are building. cc @srmsoumya @vincentsarago
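If we do limit selection to quarterly windows, the frontend could derive the preset date ranges with a small helper. A minimal sketch (the function name and its use in the UI are assumptions, not existing code):

```python
from datetime import date, timedelta

def quarterly_ranges(year):
    """Return (start, end) date pairs for the four quarters of a year.

    Hypothetical helper: the UI could offer these as preset timeframes
    instead of free daily date selection.
    """
    ranges = []
    for q in range(4):
        start_month = 3 * q + 1      # 1, 4, 7, 10
        end_month = start_month + 2  # 3, 6, 9, 12
        start = date(year, start_month, 1)
        if end_month == 12:
            end = date(year, 12, 31)
        else:
            # last day of the quarter: first day of next month minus one day
            end = date(year, end_month + 1, 1) - timedelta(days=1)
        ranges.append((start, end))
    return ranges
```

The same shape would extend naturally to an "annual" preset (January 1 to December 31 of the chosen year).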
Thanks @geohacker
One thing I'm still a bit confused by for the frontend date selection: what is the date range of a Sentinel mosaic, and how is the data composed? For example, if a user sets 1/10/2020 to 1/10/2021 as their range:
@LanesGood I'll try to expand on my thinking so far:
Does this make sense?
To answer your questions above:
> Is each tile of the mosaic presented only the most recent image?

A mosaic is a bunch of GeoTIFFs stitched together. For example, if you check this Planetary Computer explore link, it pulls Sentinel 2 imagery in a region in Mexico. The list shows all individual scenes. We can use the PC API to generate a mosaic URL from this, based on a search ID, which will look like, for example, https://planetarycomputer.microsoft.com/api/data/v1/mosaic/tiles/82ebdc445544365e45be4db6d22536ec/{z}/{x}/{y}?assets=B04&assets=B03&assets=B02&color_formula=Gamma+RGB+3.2+Saturation+0.8+Sigmoidal+RGB+25+0.35&collection=sentinel-2-l2a. Internally, the API grabs the individual GeoTIFFs, stitches them together as a mosaic, and then tiles them. So each tile will be within the timeframe, but there is no guarantee that two adjacent tiles are from the same day/time. I think there's a way to set priority, which we'll figure out.
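As a sketch of how the client could assemble such a tile URL template from a search ID (the helper name is hypothetical; the parameters mirror the example URL above, with the color_formula omitted for brevity):

```python
from urllib.parse import urlencode

PC_DATA_API = "https://planetarycomputer.microsoft.com/api/data/v1"

def mosaic_tile_template(search_id, assets=("B04", "B03", "B02"),
                         collection="sentinel-2-l2a"):
    """Build the XYZ tile URL template for a registered PC mosaic search.

    `search_id` is the hash identifying a registered STAC search; the
    template shape follows the example mosaic URL in this thread.
    """
    # urlencode with a list of pairs keeps the repeated `assets` keys
    params = [("assets", a) for a in assets]
    params.append(("collection", collection))
    return (f"{PC_DATA_API}/mosaic/tiles/{search_id}/"
            "{z}/{x}/{y}?" + urlencode(params))
```

The `{z}/{x}/{y}` placeholders are left literal so the string can be handed directly to a slippy-map library as a tile URL template.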
> Is it clear to the user what timespan a tile may represent? Is it possible and/or useful to indicate this?

Maybe; I'm not sure. Since we work with 3 months at a time, I think this would be OK. Indicating this per tile would be very complex for us, so for the first pass I think we shouldn't worry about it.
> Is it possible to indicate which sentinel platform the tile is from?

We'll only use Sentinel 2.
> Is this year-long date range of a mosaic even possible? Or is there a limit to the span of a mosaic?

Multi-year mosaics are possible (NAIP, for example), but for the current use case we'll stick to 3 months, as the change analysis is quarterly.
@ingalls @vgeorge @srmsoumya @vincentsarago please add questions and thoughts you all have. Thank you!
Thanks again for the comments @geohacker - as mentioned on slack I think this is a good basis for documentation.
The workflow proposed during today's stand up is as follows:
As discussed:
There are likely more implications for timespans in the share/export/project page flow still to come
I've created a Figma prototype for this flow.
Some questions regarding these wireframes and the flow:
The main changes on this include the selection icons/layout for Imagery and Model, given that we won't permit those to be changed once the model has been run for the first time.
Some remaining decisions to make include:
Other questions to enhance the workflow:
Considering only the user-experience perspective, I believe it makes more sense to allow drawing an AOI before selecting a mosaic/model.
In most use cases the user will already have a target area where they want to run predictions. Making the user browse through mosaics and models to find one that includes the area seems more involved than inferring the suitable mosaics/models from an AOI.
But we might need to change the endpoint GET /mosaic to include the bounds in the response; otherwise the client will need to make a request to GET /mosaic/:mosaic to fetch the bounds of each mosaic.
The endpoint GET /model already returns the bounds for each model, but it is paginated. For both endpoints, we might need to consider either a bounds filter that returns only the elements covering the area, or a parameter that disables pagination.
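If both endpoints did return bounds, the client-side filtering could be a simple bbox check. A sketch under that assumption (function names are hypothetical; a plain intersection test is used here, whereas a stricter "covers the AOI" check would test containment instead):

```python
def bbox_intersects(a, b):
    """True when two [west, south, east, north] boxes overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def filter_by_aoi(items, aoi_bounds):
    """Keep items whose `bounds` overlap the drawn AOI.

    Assumes each item from GET /mosaic or GET /model carries a `bounds`
    bbox, which is the API change proposed above.
    """
    return [item for item in items if bbox_intersects(item["bounds"], aoi_bounds)]
```

With a server-side bounds filter this logic would move into the API query instead, avoiding the pagination issue entirely.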
cc @geohacker
@LanesGood not sure if you already have a decision, but I believe it makes sense to keep the fixed parts separated. I also agree to list AOIs with different timestamps on separate lines, but I don't have a strong opinion on this.
> But we might need to change the endpoint GET /mosaic to include the bounds in the response

Yeah, we can do that.
@vgeorge while implementing this, I'd approach it from an MVP standpoint. So if we are going to hardcode some things specific to Sentinel (for example, the bbox), I think that's OK on the frontend. This is both to avoid making too many changes to the API that we might want to revise after testing the approach, and to save time on implementation.
@geohacker and I had a chat today to revisit this, here are my notes:
Please let me know if you have thoughts or questions.
@vgeorge can you provide feedback on the below screens? We discussed that mosaic selection and creation would take place in a modal. To permit both actions, I've added tabs to the modal for "Preset Mosaic" vs "Create New Mosaic."
Preset Mosaic
Create New Mosaic
@LanesGood this looks great to me. I think the preview of current viewport on the presets might be tricky. We may want to just use the thumbnail Planetary Computer returns as default.
For reference, this is how we create new mosaics manually https://github.com/developmentseed/pearl-backend/blob/project-ts/helpers/create_sentinel_mosaic.py#L10-L21. The cloud cover is by default set to 50% in the request. cc @vgeorge
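For illustration, the kind of search body that helper builds might look like the following. This is a sketch: the exact body shape the PC mosaic endpoint accepts may differ from this, but the eo:cloud_cover default of 50% matches the linked script.

```python
def sentinel_mosaic_search(start, end, max_cloud_cover=50):
    """Build a STAC-style search body for a Sentinel-2 mosaic.

    `start`/`end` are ISO date strings; cloud cover defaults to 50%,
    matching create_sentinel_mosaic.py. The field names here are
    assumptions based on common STAC search conventions.
    """
    return {
        "collections": ["sentinel-2-l2a"],
        "datetime": f"{start}/{end}",
        "query": {"eo:cloud_cover": {"lt": max_cloud_cover}},
    }
```

Exposing `max_cloud_cover` as a parameter would let the "Create New Mosaic" tab offer it as a user-adjustable setting later without changing the helper.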
@LanesGood this workflow looks really good. Regarding the PC thumbnails, it seems they are only available for imagery sources, not for mosaics. If we want to display thumbnails for mosaics, we need to show a static map with the mosaic as the base layer on each card.
We've implemented this ticket. Continued revision and work is detailed in #85
We will now start supporting Sentinel. There are a couple of implications here:
Once the user selects imagery and a timeframe, the map should update with the new mosaic. We should also use this selection to tell the inference API which tiles to use.
@LanesGood @vgeorge this requires some thinking on the design side. cc @ingalls @vincentsarago @batpad