pkmiles opened this issue 9 months ago
Agree. @joaoantoniocardoso has also mentioned that at a certain point.
Could you detail what the ideal experience with that would be? Maybe a mini-widget for taking snapshots of a pre-configured stream, also allowing that to be done from the joystick?
I'm out on the water Friday so will give this a bit more thought. I think if you were to follow standard conventions, then video and photos are two distinct modes of behaviour (consider how GoPro handles this in their UX). So, I think you're either in video recording mode or in photo mode. With a multi-camera setup you'd want to know which camera stream you're referring to, but you don't necessarily want to make it a multi-step process to take a picture / trigger video and then select the stream. So, I'm picturing an enhancement to the existing recording mini widget which would:
I guess there's the case of multi-stream recording / image capture, but that feels like quite a different use case... right?
About the multi-step process, I think that's it. The main problem today does seem to be having to click the record button and then select the stream, when in 99% of cases you want to record the same stream as before, because you usually have only one real stream. Since the widget already stores the last stream used, we can change the behavior to a single step, asking for the stream only if it was never set and there is more than one available.
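The one-step behavior described above can be sketched as follows. This is an illustrative sketch, not Cockpit's actual code; names like `lastRecordedStream` are hypothetical.

```typescript
// Decide which stream the record button should act on, asking the user
// only when there is genuine ambiguity (hypothetical helper).
function streamToRecord(
  availableStreams: string[],
  lastRecordedStream: string | undefined
): string | 'ask-user' {
  // Only one stream available: nothing to choose, record it directly.
  if (availableStreams.length === 1) return availableStreams[0]
  // A stream was recorded before and still exists: reuse it (one-step flow).
  if (lastRecordedStream && availableStreams.includes(lastRecordedStream)) {
    return lastRecordedStream
  }
  // Never set (or the stored stream disappeared): fall back to asking.
  return 'ask-user'
}
```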
Apart from that, if the user has multiple streams, it makes much more sense to:
The switch between photo and video is something I'm asking myself about. It makes sense for GoPros and DJI drones, as they change the capture mode on the camera itself and the photos are much better than a frame of the video would be. For us, though, I don't know if we even have this option in MAVLink today, and if so, whether the camera manager supports it. Because of that, I wonder if it would make more sense to extend the video recorder widget with a camera button that takes a snapshot of the stream and can be used in all situations (whether the user is recording or not).
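A frontend-side snapshot of the stream could be done with standard browser APIs: draw the current frame of the `<video>` element onto a canvas and export it as a JPEG. This is a minimal sketch; the filename convention is a hypothetical illustration, not Cockpit's.

```typescript
// Hypothetical filename convention for saved snapshots.
function snapshotFileName(streamName: string, when: Date): string {
  return `${streamName}-${when.toISOString().replace(/[:.]/g, '-')}.jpeg`
}

// Capture the currently displayed frame of a <video> element as a JPEG blob
// (browser-only: uses document and canvas).
function captureFrame(video: HTMLVideoElement): Promise<Blob> {
  const canvas = document.createElement('canvas')
  canvas.width = video.videoWidth
  canvas.height = video.videoHeight
  canvas.getContext('2d')!.drawImage(video, 0, 0)
  return new Promise((resolve, reject) =>
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error('toBlob failed'))),
      'image/jpeg',
      0.95
    )
  )
}
```

Note that this captures whatever the decoded stream shows, so quality is bounded by the video encoding, unlike a native photo mode on the camera.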
@joaoantoniocardoso @patrickelectric, calling on you to comment about this support for photos, both in the cameras and in MAVLink/MCM.
@rafaellehmkuhl MCM has the thumbnail endpoint on its REST API, but it doesn't implement it for snapshots via MAVLink commands. The thumbnail has a configurable quality and size (via parameters), and it only uses keyframes, so the quality can be very good.
> MCM has the thumbnail endpoint on its REST API, but it doesn't implement it for snapshots via MAVLink commands. The thumbnail has a configurable quality and size (via parameters), and it only uses keyframes, so the quality can be very good.
Right! And I imagine doing those snapshots on the backend, hard-linked to the source, is probably better than doing them from Cockpit.
I'm seeing here that MAVLink has a camera capabilities message. Do we implement that on MCM?
And could we implement the camera control command for taking snapshots with MCM?
> I'm seeing here that MAVLink has a camera capabilities message. Do we implement that on MCM?
We do. Today all MCM streams are CAMERA_CAP_FLAGS_HAS_VIDEO_STREAM.
> And could we implement the camera control command for taking snapshots with MCM?
We could, but keep in mind that the image capture would store the image in the vehicle, so the frontend can't access it, just like a video capture.
For Cockpit, it would be useful to have an image snapshot feature of its own, just like its own video recording.
> We could, but keep in mind that the image capture would store the image in the vehicle, so the frontend can't access it, just like a video capture.
Yeah. I was thinking about MCM also making the snapshot available on an API in that case. But I think we can start just with the snapshot on the stream, Cockpit side.
> Yeah. I was thinking about MCM also making the snapshot available on an API in that case. But I think we can start just with the snapshot on the stream, Cockpit side.
The snapshot is already available via the REST API: blueos.local:6020/docs/index.html?url=/docs.json#/default/getthumbnail
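Consuming that endpoint from the frontend could look like the sketch below. The exact path and query parameter names (`/thumbnail`, `source`, `quality`) are assumptions here; the API docs linked above are the real contract.

```typescript
// Build the URL for MCM's thumbnail endpoint (parameter names assumed).
function thumbnailUrl(base: string, source: string, quality = 85): string {
  const url = new URL('/thumbnail', base)
  url.searchParams.set('source', source)
  url.searchParams.set('quality', String(quality))
  return url.toString()
}

// Usage (browser or Node 18+):
// const res = await fetch(thumbnailUrl('http://blueos.local:6020', '/dev/video0'))
// const image = await res.blob()
```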
Selfishly pushing my own use case here, but hopefully it might be useful for context. :-) I posted this a while back on the BR Forum on what we're attempting to do.
I see triggering image capture from the UI as just one method of stills triggering. Ideally we'd love to see:

i) UI triggering (that photo button)
ii) Time-based, periodic triggering, e.g. one photo every x seconds.
iii) Distance-based triggering (requires DVL), e.g. one photo every y metres.
Then ideally we'd want to georeference these images using initial Lat/Lng and relative distance travelled from the DVL.
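For illustration, the distance-based mode could be a small accumulator fed with incremental displacement (e.g. integrated DVL velocity), firing a capture every configured number of metres. All names here are hypothetical.

```typescript
// Fire a capture event every `intervalMetres` of travelled distance.
class DistanceTrigger {
  private travelled = 0
  constructor(private readonly intervalMetres: number) {}

  // Feed an incremental displacement; returns true when a photo should be taken.
  update(deltaMetres: number): boolean {
    this.travelled += deltaMetres
    if (this.travelled >= this.intervalMetres) {
      // Keep the remainder so capture spacing stays close to the interval.
      this.travelled -= this.intervalMetres
      return true
    }
    return false
  }
}
```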
We've been looking at a couple of camera options for this: IP cameras that support H.264/H.265 streams, and machine vision cameras (GigE GenICam) that have internal frame buffers. We're still looking at what happens topside vs. down at the Raspberry Pi.
It would be nice to support proper photo capture for cameras that support it, but I view that as longer term because it requires potentially quite significant MCM changes.
For existing video streams I think it's definitely worth supporting keyframe capture, and if the autopilot is providing positioning estimates then it seems reasonable to inject that into the image EXIF data for geo-referencing.
MAVLink has a variety of camera control modes to support different kinds of triggering, which are worth looking into. As an example, MAV_CMD_DO_CONTROL_VIDEO can do time-based periodic triggering, and I know there are additional options available within mission planning, like getting the camera to consistently point at a target (I assume there's probably an option for some kind of distance-based triggering as well).
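Another command worth a look for the "one photo every x seconds" case is MAV_CMD_IMAGE_START_CAPTURE (id 2000) from the MAVLink common message set, sent as a COMMAND_LONG. The sketch below just assembles the parameter layout; the plain-object shape is illustrative, not a real MAVLink library API.

```typescript
// MAV_CMD_IMAGE_START_CAPTURE, per the MAVLink common message set:
// param2 = interval between captures (s), param3 = total number of images
// (0 = capture until stopped), param4 = starting capture sequence number.
const MAV_CMD_IMAGE_START_CAPTURE = 2000

function periodicCaptureCommand(intervalS: number, totalImages = 0) {
  return {
    command: MAV_CMD_IMAGE_START_CAPTURE,
    param1: 0, // reserved
    param2: intervalS,
    param3: totalImages,
    param4: 1, // capture sequence number
    param5: 0,
    param6: 0,
    param7: 0,
  }
}
```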
@ES-Alexander:
> For existing video streams I think it's definitely worth supporting keyframe capture, and if the autopilot is providing positioning estimates then it seems reasonable to inject that into the image EXIF data for geo-referencing.
Under this approach, would storing the EXIF geo-reference be done top-side or down in the ROV?
> would storing EXIF geo-reference be done top-side or down in the ROV?
It could be either, depending on where the snapshots are being captured, and whether they get sent to the control station at all.
Since some vehicles have low-bandwidth links, it likely makes sense to do it on the vehicle if possible; but if necessary it's also possible to process all the images in post, aligning the data from the telemetry/autopilot logs with the image timestamps (it's just more convenient if it can be done live, when the images are being captured).
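The post-processing alternative mentioned above boils down to matching each image timestamp against the nearest telemetry sample from the log. A minimal sketch, with hypothetical record structures:

```typescript
// A position sample as it might be extracted from an autopilot log.
interface TelemetrySample { timeMs: number; lat: number; lon: number }

// Find the telemetry sample whose timestamp is closest to the image's.
function nearestSample(
  samples: TelemetrySample[], // assumed non-empty
  imageTimeMs: number
): TelemetrySample {
  let best = samples[0]
  for (const s of samples) {
    if (Math.abs(s.timeMs - imageTimeMs) < Math.abs(best.timeMs - imageTimeMs)) {
      best = s
    }
  }
  return best
}
```

A linear scan keeps the sketch short; for large logs a binary search over time-sorted samples would be the obvious refinement.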
I like the idea of #594 (Event capture) as a full feature. It would be great to at least support single-image capture from a video source. It could be on the same widget (like the QGC design).