huningxin opened 9 years ago
Posted the discussion email thread here:
----------------------------------START----------------------------------------------------
Hi Ningxin,
Everything looks good to me! I like your proposal. I have no questions now.
Thanks,
CTai
2015-06-08 10:40 GMT+08:00 Hu, Ningxin ningxin.hu@intel.com: Hi CTai,
CTai wrote:
- Can we make the RGB+depth synchronization work with VideoWorker?
A: I think we should discuss the situations below and come up with a solution. Suppose we add an optional attribute, inputDepth, into VideoProcessEvent:
- video:true, depth:true
  In depth track => VideoProcessEvent { inputImage, inputDepthMap, outputImage }
  In video track => VideoProcessEvent { inputImage, outputImage }
I propose to introduce a new interface (RGBDProcessEvent; I leave the naming to others):
interface RGBDProcessEvent : VideoProcessEvent {
  readonly attribute DOMString depthTrackId;
  readonly attribute ImageBitmap inputDepthMap;
  readonly attribute ImageBitmap? outputDepthMap;
};
Main.js
// Assuming a VideoWorker is constructed from a script URL,
// as in the mediacapture-worker examples.
var RGBDWorker = new VideoWorker('RGBDWorker.js');
navigator.mediaDevices.getUserMedia({
  depth: true,
  video: true
}).then(function (mediaStream) {
  // Monitor both tracks with the same worker so it receives
  // events from each of them.
  var videoTracks = mediaStream.getVideoTracks();
  videoTracks[0].addWorkerMonitor(RGBDWorker);
  var depthTracks = mediaStream.getDepthTracks();
  depthTracks[0].addWorkerMonitor(RGBDWorker);
});
RGBDWorker.js
onvideoprocess = function (event) {
  // processRGBD is app-defined; on the depth track the event carries
  // both the color image and the depth map.
  processRGBD(event.inputImageBitmap, event.inputDepthMap);
};
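For illustration, a minimal sketch of what processRGBD might do. The mapDataInto() call and the "DEPTH" format follow the ImageBitmap extension proposal, but the exact signature is an assumption here, as is the masking logic:
function processRGBD(colorBitmap, depthBitmap) {
  // Assumption: the ImageBitmap extension exposes raw pixel access via
  // mapDataInto(); the signature below is a guess based on the proposal.
  var buffer = new ArrayBuffer(depthBitmap.width * depthBitmap.height * 2);
  depthBitmap.mapDataInto("DEPTH", buffer, 0).then(function (layout) {
    var depth = new Uint16Array(buffer); // 16-bit depth values, per the proposal
    // ... e.g. mask out colorBitmap pixels whose depth exceeds a threshold ...
  });
}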
- video:true, depth:false
  In video track => VideoProcessEvent { inputImage, outputImage }
No Change.
- video:false, depth:true
  In depth track => VideoProcessEvent { inputImage??, inputDepthMap, outputImage?? }
As we are going to extend ImageBitmap to support depth maps, the existing VideoProcessEvent interface can handle the depth-only use case.
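To make this concrete, a minimal sketch of the depth-only case, assuming a VideoWorker is constructed from a script URL and the depth map is delivered as inputImageBitmap; recognizeGesture is a hypothetical app-defined function:
Main.js
var depthWorker = new VideoWorker('DepthWorker.js');
navigator.mediaDevices.getUserMedia({
  video: false,
  depth: true
}).then(function (mediaStream) {
  // Only a depth track exists, so the existing monitor API is enough.
  mediaStream.getDepthTracks()[0].addWorkerMonitor(depthWorker);
});
DepthWorker.js
onvideoprocess = function (event) {
  // In the depth-only case the depth map arrives as the
  // (DEPTH-format) inputImageBitmap; no new event interface is needed.
  recognizeGesture(event.inputImageBitmap); // hypothetical, app-defined
};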
I think the problem is how we deal with the third case. Does that mean inputImage is nullable? Or do we just set inputImage to the same value as inputDepthMap?
What is the value of outputImage in the third case when you use AddVideoProcessor? I suggest just a normal outputImage, as for a video track.
I saw the comment below in example 2 of [2].
As WebRTC will support depth tracks later, web apps can implement depth pre-processing algorithms (like edge noise canceling) in a VideoWorker and pipeline them to PeerConnection.
// wire the depth stream into another
This is a good point!
What are your thoughts?
Another question: should we support outputDepthMap in VideoProcessEvent? I don't have a concrete use case for this.
See the depth pre-processing use case above.
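For illustration, a sketch of that pre-processing flow. addVideoProcessor follows the name used earlier in this thread, and cancelEdgeNoise is a hypothetical filter:
Main.js
var denoiseWorker = new VideoWorker('DenoiseWorker.js');
var peerConnection = new RTCPeerConnection();
navigator.mediaDevices.getUserMedia({ depth: true }).then(function (mediaStream) {
  // A processor (unlike a monitor) lets the worker's output replace
  // the track content before it goes downstream.
  mediaStream.getDepthTracks()[0].addVideoProcessor(denoiseWorker);
  // The processed depth stream can then be piped to a peer.
  peerConnection.addStream(mediaStream);
});
DenoiseWorker.js
onvideoprocess = function (event) {
  // How the processed map is written back is exactly what is open here;
  // the interface sketch above marks outputDepthMap readonly, so this
  // assignment only shows intent.
  event.outputDepthMap = cancelEdgeNoise(event.inputDepthMap);
};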
Thanks, -ningxin -----------------------------------------END-------------------------------------------------------
A depth stream track means a MediaStreamTrack object that represents media sourced from a depth camera. It is specified in Media Capture Depth Stream Extensions. A depth stream track, alone or together with a video stream track, can enable innovative applications on the web, such as gesture recognition, background removal in video conferencing, and 3D scanning. See more use cases at http://www.w3.org/wiki/Media_Capture_Depth_Stream_Extension.
For the depth-only use case, say gesture recognition, the existing mediacapture-worker is capable once the ImageBitmap extension supports a DEPTH image format (https://github.com/kakukogou/spec-imagebitmap-extension/issues/1).
Use cases that combine depth and video stream tracks, say background removal in video conferencing and 3D scanning, require spatial and temporal synchronization.
The mediacapture-depth spec already specifies that when both video and depth streams are requested, the user agent needs to capture color and depth video in a synchronized way. So when doing post-processing in a VideoWorker, a web application can associate one video worker with both the depth stream track and the video stream track. Ideally, the web application could access the synchronized color image and depth image in one VideoProcessEvent. This requires extending the mediacapture-worker spec to support this use case.