I tested this about a month ago, and the Vive passthrough camera is accessible with `getUserMedia`. And yeah, you did need to enable a Steam VR preference. I could see where it might make sense to add some kind of id that would link the camera with the headset, but...

The latency on the camera was way too high to link it with head tracking. And the resolution is quite low. So I wouldn't call this useful for any kind of passthrough display right now.
Not having used `getUserMedia` a whole lot, I'm not sure what the best way to surface this would be, but it would be nice to be able to either request, or identify after acquiring, video devices associated with the `VRDisplay`. Same goes for microphones. Beyond that basic linking, though, it seems like `getUserMedia` covers the desired functionality as-is?
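To illustrate the gap: without that linking, the best a page can do today is enumerate devices and pattern-match on labels. A rough sketch (the `/vive/i` label match is just a guess at how the camera would be named):

```js
// Non-normative sketch: find the headset camera by enumerating devices and
// matching on the label string, which is fragile and only works after a
// permission grant has made labels visible.
async function findHeadsetCamera() {
  // Prompt once so device labels become populated.
  const probe = await navigator.mediaDevices.getUserMedia({ video: true });
  probe.getTracks().forEach(track => track.stop());

  const devices = await navigator.mediaDevices.enumerateDevices();
  // Guessing the camera shows up with "Vive" somewhere in its label.
  return devices.find(d => d.kind === 'videoinput' && /vive/i.test(d.label));
}

findHeadsetCamera().then(cam => {
  if (!cam) return console.log('No headset camera found');
  return navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: cam.deviceId } }
  });
});
```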
It might be useful to have some information about the camera lens, as I imagine some distortion correction may be necessary to make sure the camera view matches any rendered objects in front of it and/or to keep people from getting too sick.
I can't find much in the OpenVR documentation, but if I dig around, it looks like there's a `GetCameraIntrinsics` method on a class called `IVRTrackedCamera`. That might be a good place to start.
But again, I'm not sure how useful this is with such long latency on that camera. I've seen high latency with a number of webcams accessed through `getUserMedia`. I don't know if that's something in the browser or in the webcam drivers.
Someone should look at the media-capture-depth extension. At one point, it included proposals for retrieving camera intrinsics. The trick, of course, is doing an efficient implementation. This is something I'm very interested in seeing happen as part of the AR project I'm starting at Mozilla, and I'm talking to folks here about that. In the near term, though, the latency will be a huge issue, I agree.
The current mediacapture-depth spec exposes the `near`, `far`, `focalLengthX`, and `focalLengthY` camera intrinsics. We are discussing exposing more intrinsics, namely `principalPointX`, `principalPointY`, and `lensDistortion`, in https://github.com/w3c/mediacapture-depth/issues/110.
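For illustration, here's roughly how those would be read off a track under the current draft. This is a sketch only; `videoKind` and the settings names come from the proposal text and may not be implemented anywhere yet:

```js
// Sketch based on the mediacapture-depth draft: the intrinsics above would
// surface as MediaTrackSettings on a depth track.
async function readDepthIntrinsics() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { videoKind: { exact: 'depth' } }  // videoKind is also draft-only
  });
  const [track] = stream.getVideoTracks();
  const settings = track.getSettings();

  console.log('near/far:', settings.near, settings.far);
  console.log('focal length:', settings.focalLengthX, settings.focalLengthY);
  // Under discussion in mediacapture-depth#110:
  console.log('principal point:', settings.principalPointX, settings.principalPointY);
  console.log('lens distortion:', settings.lensDistortion);
}
```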
`getUserMedia` is for general webcam access, which is not expected to be synced with HMD head tracking. I'd guess the camera equipped on an HMD has some kind of synchronization, like Tango, especially on camera-based inside-out tracking devices. We may need some investigation here.
Decision made at the Seattle F2F: for passthrough camera feeds we'll be relying on more general AR mechanisms, and for other needs we should be using `getUserMedia`.
Hey! So I have a general question about camera access. I am using a Nokia 8.1 on Android in Chrome Canary. I open up the camera feed through `XRSession`, but the stream is blurred and doesn't find the focus. Is this an AR-related issue, or should I use `getUserMedia`? I tried solving this with `getUserMedia` and `focusMode` but without success. Can anybody help?
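Roughly, applying `focusMode` via constraints looks like this (a sketch; support varies by browser and device, which may be why it had no effect here):

```js
// Sketch only: focusMode comes from the MediaStream Image Capture spec, and
// not every browser/device exposes it on camera tracks.
async function startWithContinuousFocus() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' }
  });
  const [track] = stream.getVideoTracks();

  const caps = track.getCapabilities ? track.getCapabilities() : {};
  if (Array.isArray(caps.focusMode) && caps.focusMode.includes('continuous')) {
    await track.applyConstraints({ advanced: [{ focusMode: 'continuous' }] });
  } else {
    console.log('continuous focusMode not reported by this track');
  }
  return stream;
}
```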
@Fehler40 - this repository is focused (no pun intended) on the specification itself. If you have issues with specific implementations and devices, please try https://webvr.slack.com/ or browser-specific issue trackers such as https://crbug.com/.
The HTC Vive has a passthrough camera so you don't step on your loved ones - pets, pests, and humans alike. Last time I checked, you have to enable a pref in the Developer settings of Steam VR. I expect that to change at some point. In any event, it'd be awesome if the WebVR APIs had access to this. We could leverage `navigator.getUserMedia` (giving us Permissions API provisioning and a proven API for free), but I can imagine cases where you want to gain access to both your PC camera and your headset's camera (for example, to record or project the faces of your friends watching into your VR world). I don't know whether `navigator.getUserMedia` supports multiple cameras. Or perhaps this could be introduced as a new interface to the WebVR API?
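For the multiple-camera case, a rough sketch with the modern `navigator.mediaDevices` API (whether a browser allows two simultaneous captures is implementation-dependent):

```js
// Sketch: getUserMedia returns one stream per call, but you can open a
// separate stream per camera by deviceId.
async function openAllCameras() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const cameras = devices.filter(d => d.kind === 'videoinput');

  const streams = [];
  for (const cam of cameras) {
    streams.push(await navigator.mediaDevices.getUserMedia({
      video: { deviceId: { exact: cam.deviceId } }
    }));
  }
  return streams; // e.g. one stream for the PC webcam, one for the headset camera
}
```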