pebau opened 7 years ago
@pebau: duplicate of https://github.com/opengeospatial/testbed14-ideas/issues/1? In other words, what is the difference between this ticket and https://github.com/opengeospatial/testbed14-ideas/issues/1?
oops, you are right - both should be merged somehow. I can gladly help.
-Peter
While we're in brainstorm mode... any use cases for video sequences in datacubes? Does it make sense? I'm thinking of something along the lines of ArcGIS video layers or, more simply, the georeferenced output of some tracking, segmentation, or classification algorithm applied to a video sequence. It's x/y/t and raster. It might not be the best way to deliver a streaming video, but I'd be curious to see how a datacube approach would help to subset or fuse data from various sources.
oh yes, of course this can be done - we've had fun with that around 2000, see attached.
But typically it does not make sense, because the viewing vector changes; that means the georeference is time-dependent in a way that only humans can detect. This is different for fixed cameras; a possible use case: watching the growth of a patch of seafloor from a stationary observatory. Or videos where the change of the viewing vector is known, for example because a camera is mounted rigidly on an airplane.
Still, even if this change is known, slicing along time will be difficult: you need to extract pixels along a trajectory. This is possible with a raster/polygon clipping method we are just adding to rasdaman. It is also topical in OGC, cf. "corridor queries" and "curtain queries" for aviation.
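To make the trajectory idea a bit more concrete, here is a minimal sketch in plain NumPy (not the rasdaman clipping method itself) of extracting a pixel corridor along a moving viewpoint from an x/y/t cube; the cube shape, trajectory, and corridor width are all illustrative assumptions:

```python
import numpy as np

# Illustrative x/y/t datacube: 100 time steps of 200 x 200 pixels, axis order (t, y, x)
cube = np.random.rand(100, 200, 200)

# Hypothetical camera trajectory: one (x, y) pixel position per time step
t = np.arange(cube.shape[0])
traj_x = (50 + 1.2 * t).astype(int)
traj_y = (30 + 0.8 * t).astype(int)

# Extract a small window around the trajectory at each time step,
# i.e. a "corridor" through the cube rather than a fixed x/y slice.
# This trajectory stays away from the image borders, so every window
# has the same 11 x 11 shape and can be stacked into a new cube.
half = 5  # corridor half-width in pixels
corridor = np.stack([
    cube[i,
         traj_y[i] - half: traj_y[i] + half + 1,
         traj_x[i] - half: traj_x[i] + half + 1]
    for i in t
])
print(corridor.shape)  # (100, 11, 11)
```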
cheers,
Peter
Thanks for the details Peter. I did not find any attachment to your communication though. At the risk of diverting your initial idea for https://github.com/opengeospatial/testbed14-ideas/issues/30, let me add context for video processing. If you find this idea worthwhile, maybe we could move or merge your initial Datacube idea into https://github.com/opengeospatial/testbed14-ideas/issues/1 and rename this one to cover datacubed video processing.
My institution, CRIM, has developed and maintains an asynchronous audio/video processing platform (non-OGC compliant pub/sub system) called VESTA. The main use case is video processing for digital humanities, but we have done a fair amount of experimentation with seafloor videos and submarine dive videos. Indeed, Ocean Networks Canada is part of our research software community sponsored by CANARIE, and we've been collaborating with them for the last 4 years or so.
At the moment, we process seafloor video using software video transcoders, video subsetting services (pixel-based ROI, timestamps) as well as change/motion/event detection services. Detections (marine life, camera motion and sediments, mostly) are stored in a NoSQL database. What we haven't tried is to map the observed scene to real-world coordinates. Having a datacube-aware WCS service in front of our video system would allow querying in x-y-z-t coordinates instead of i-j-t pixel space. That's probably not the only advantage we could get from WCS, but you seem to be in a very good spot to know about those advantages!
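To sketch what such a query could look like, below is a hypothetical WCS 2.0.1 GetCoverage request built with Python's requests library; the endpoint URL, the coverage id seafloor_cam_cube, and the axis labels (Lat, Long, ansi for time) are illustrative assumptions, not an existing service:

```python
import requests

# Hypothetical datacube-aware WCS endpoint in front of the video archive
endpoint = "https://example.org/wcs"

params = {
    "service": "WCS",
    "version": "2.0.1",
    "request": "GetCoverage",
    "coverageId": "seafloor_cam_cube",  # illustrative coverage name
    # Spatio-temporal subsetting instead of i/j/t pixel indices:
    "subset": [
        'Lat(48.42,48.43)',
        'Long(-123.38,-123.37)',
        'ansi("2017-08-01T00:00:00Z","2017-08-02T00:00:00Z")',
    ],
    "format": "application/netcdf",
}

# requests encodes the list as repeated subset= key/value pairs
response = requests.get(endpoint, params=params)
response.raise_for_status()
with open("seafloor_subset.nc", "wb") as f:
    f.write(response.content)
```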
In the past, my team also delivered several other video-based implementations that would make interesting geospatial use cases: sports science, video surveillance, robot vision, indoor people tracking for stores, people tracking for VR experiments, etc. We've also delivered UAV-based 3D photogrammetry, so we could explore aerial video too.
Hi Tom,
hm, for me the mowglie image opened, but here it is again; see the JPG attached.
What you describe about your work sounds exciting - would you want to present at an OGC TC meeting, in the Coverages.DWG session (they are the guys dealing with datacubes)? Further, given the recent interest in datacubes, I am planning a dedicated WG, though in close sync with the coverages work.
-Peter
Datacubes are an emerging paradigm for organizing data in an analysis-ready manner on the server, based on the observation that the zillions of files from, say, a satellite instrument are not as easy to handle as one single spatio-temporal object offering access, slicing and dicing, analysis, and visualization. The OGC Coverage data and service models, "Coverage Implementation Schema" and "Web Coverage Service", allow n-dimensional datacubes to be modeled and handled naturally. Numerous implementations (open source as well as proprietary) have demonstrated feasibility. Still, new questions keep emerging, and general knowledge among stakeholder communities needs to be improved. For T14 it is proposed to advance work on datacube coverages. Work items include:
To avoid excessive implementation overhead, practical work should be based on the OGC/INSPIRE WCS reference implementation, rasdaman, which conveniently already implements WCS Core and all WCS extensions on all spatio-temporal dimensions.
This work will not only be relevant within the OGC community, but will also benefit ISO, INSPIRE, and beyond.
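As a small illustration of the "access, slicing and dicing, analysis" capability described above, here is a minimal sketch of a WCPS query (WCS Processing extension) sent over HTTP with Python; the endpoint URL, the coverage name S2_datacube, and the band/axis names are illustrative assumptions, not an actual Testbed deliverable:

```python
import requests

# Hypothetical WCS endpoint offering the Processing (WCPS) extension
endpoint = "https://example.org/rasdaman/ows"

# Average NIR reflectance over a spatio-temporal slab of an assumed
# Sentinel-2 style datacube; coverage, band, and axis names are illustrative.
wcps_query = """
for $c in (S2_datacube)
return
  avg($c.nir[ansi("2017-06-01":"2017-06-30"),
             Lat(48.0:48.5), Long(-123.5:-123.0)])
"""

response = requests.post(endpoint, data={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": wcps_query,
})
response.raise_for_status()
print(response.text)  # scalar result, e.g. "0.31"
```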