Hey Tim, for starters I wanted to say thank you for the myriad OpenLayers examples and tutorials you've released into the wild. This one in particular really opened the eyes of the National Wildfire Coordinating Group to the possibilities of the work that organizations such as Boundless, Planet Labs, and Terrabella are doing for wildland fire applications. I'm presently one of the co-leads spearheading the effort to completely revamp the content management system for the National Interagency Dispatch Committee (basically the logistics arm of NWCG), and we would like to integrate OL3 alongside OL3-Cesium as the public-facing GIS solution for the new platform. There are some things specific to that effort I would like to discuss with you offline, perhaps via email, if you have the time; but since this is an issue, I wanted your thoughts on whether this would be a good feature to add to the standard OpenLayers 3 sources.

Static map sources are a dime a dozen. Using this particular example as a template, I was able to capture live imagery from BLM cameras and cameras from the ALERT (Tahoe) project and overlay it as another canvas within another OpenLayers map at http://gacc.nifc.gov/gbcc. In the upper right corner of that map I've integrated Matt Walker's OL3 layer switcher; if you enable the "NV BLM Cameras" layer, markers appear across the rendering of Nevada. Hovering over a marker shows the field of view of the camera, and on non-touchscreen devices, clicking the marker opens another OL3 canvas showing the feed from that camera. A right-click on the camera feed opens Jonatas' OL3 context menu, which lets you loop through the various timescales captured by the camera.
Right now, I'm trying to figure out how to position that canvas view as an augmented-reality-esque feature in OL3-Cesium when the user flips over to the 3D view (bottom left corner, probably better in full screen). I'm almost there; the remaining problem is the camera positioning introduced by the camera's zoom functionality. Our fire meteorologists were also intrigued by your OL3 Mapbox video hack (I got it working quite easily), and using it as a guide I was able to get another Mapbox demo working in OpenLayers, seen at http://gacc.nifc.gov/gbcc/pwat.php (it doesn't seem to play nice with mobile).

There is a great deal more we'd like to do with animated map demos, given the incredible amount of GIS data on fire perimeters we generate every fire season: for example, ffmpeg-generated videos of Planet Labs satellite imagery of the earth's actual terrain, overlaid with the actual fire perimeter polygons. (Those perimeters are also available as a layer on the GBCC map for active fires; we generate them with NIROPS flights every night. As an aside, OpenLayers 3 was about the greatest thing to happen to our public-facing websites, and it really forced some of our ESRI contractors to step up when confronted with the functionality and rendering performance of the OL3 library versus ArcGIS Online's offerings.) Right now we're limited to heatmap heuristics to brief budgetary authorities on the severity of the fire situation, such as those employed with the OL3 solutions/hacks here: http://gacc.nifc.gov/gbcc/firecesium/examples/vectorHeat.html, http://gacc.nifc.gov/gbcc/firecesium/examples/totalHeat.html, and http://gacc.nifc.gov/gbcc/firecesium/examples/seriesHeat.html
Given the sheer number of fires produced in a given season, and the ideal goal of seeing them flit in and out of view as they ignite, burn, and are eventually snuffed out (or continue burning until snow finally falls), asking the canvas to render and animate all those features is certainly possible, but it seems like video layers/overlays could really shine here and free up the renderer for other tasks: instead of animating features on the fly, it could simply lay down a static image captured from an HTML5 video element every 1/60th of a second and project it onto the canvas. An out-of-the-box dynamic source API seems like it could be really helpful. There are presently a lot of ways to do things, but from what I can gather (likely very wrong, though), no mapping library has really considered projected video as a core data source to be all that important. We're producing a great deal of data using photogrammetry and remote video sensing, and augmented-reality views seem to be noticeably absent from most mapping applications.

Somewhat off-topic: in discussing the fire dispatch community's need to track resource assets like airtankers, helicopters, engines, smokejumpers, and hotshot crews traveling to and from incidents, we have dynamic streams of data coming from Spidertracks to power our Automated Flight Following (AFF) application. After hacking away at tsauerwein's flight path animation example, we were impressed by OL3's ability to keep up in terms of rendering performance while zooming and panning; our present ESRI-powered AFF application has to completely redraw to update.
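For what it's worth, the "lay down a video frame each tick" idea above can already be prototyped today with OL3's `ol.source.ImageCanvas`. This is only a rough sketch of that workaround, not the proposed API; the video URL and geographic extent below are placeholders I made up:

```javascript
// Sketch: project the current frame of an HTML5 <video> onto the map
// using ol.source.ImageCanvas (OL3). The video URL and extent are
// placeholders, not real data.
var video = document.createElement('video');
video.src = 'perimeters.webm';  // hypothetical ffmpeg-generated clip
video.muted = true;
video.loop = true;
video.play();

// Geographic footprint of the video, in view projection units (EPSG:3857).
var videoExtent = [-13407000, 4540000, -12700000, 5160000];

var videoSource = new ol.source.ImageCanvas({
  canvasFunction: function(extent, resolution, pixelRatio, size, projection) {
    var canvas = document.createElement('canvas');
    canvas.width = size[0];
    canvas.height = size[1];
    var ctx = canvas.getContext('2d');
    // Map the video footprint into pixel coordinates for this extent.
    var pxPerUnit = pixelRatio / resolution;
    var x = (videoExtent[0] - extent[0]) * pxPerUnit;
    var y = (extent[3] - videoExtent[3]) * pxPerUnit;
    var w = (videoExtent[2] - videoExtent[0]) * pxPerUnit;
    var h = (videoExtent[3] - videoExtent[1]) * pxPerUnit;
    ctx.drawImage(video, x, y, w, h);
    return canvas;
  },
  projection: 'EPSG:3857'
});

var videoLayer = new ol.layer.Image({source: videoSource});

// Mark the source changed each animation frame so the current
// video frame gets redrawn onto the map.
function tick() {
  videoSource.changed();
  requestAnimationFrame(tick);
}
tick();
```

The obvious downside of this approach is that it re-allocates a canvas and re-draws per frame from application code, which is exactly the boilerplate a built-in dynamic/video source could hide.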
It would be nice to hear your take on what kind of pull requests might make the OL3 library a little more friendly to dynamic data, or might enhance the present API to make handling that sort of thing more straightforward, especially now that things like streams are on the horizon.
By the way, OL3 has been an amazing tool, and we can't wait to harness it throughout the wildland firefighting community. Again, a short conversation offline regarding some other matters would be awesome if you can find time for it.
Sorry if the issue isn't clear: I was inquiring whether an ol video source might become part of the OL3 API, or whether something more general like an ol.imageDynamic source might be appropriate.
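To make the request concrete, here is one guess at what usage of such a source might look like. To be clear, none of this exists in OL3 today; the constructor name, options, and URL are all hypothetical:

```javascript
// Hypothetical API shape for the proposed video source -- nothing here
// exists in OL3; it is only what usage might look like.
var fireVideo = new ol.source.Video({      // hypothetical constructor
  url: 'nirops-perimeters.webm',           // placeholder URL
  imageExtent: [-13407000, 4540000, -12700000, 5160000],
  projection: 'EPSG:3857'
});
var perimeterLayer = new ol.layer.Image({source: fireVideo});
```

Something along these lines, with the library handling frame synchronization and reprojection internally, would cover both the recorded-video and live-camera cases described above.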