m-7761 opened this issue 3 years ago
The application is open source so there's nothing stopping you from porting it to OpenXR.
OpenXR is certainly the future, but Desktop+ heavily relies on SteamVR's overlay system, so a 1:1 port seems difficult without creating an equivalent of that. The current proposal for OpenXR (no idea about implementations, to be honest) is a full-screen, view-covering overlay extension, which leaves the rest as the application's burden. I'm not exactly keen on doing all of that myself for little benefit, so I have no plans for OpenXR right now.
The codebase is not exactly equipped to switch to another VR runtime and is currently undergoing bigger changes with the UI redesign... but well, if you want to work on something, I'd say just go for it. Worst case, it'd be a good fork. I just don't think it's going to be a simple task.
If you're just after desktop capture code, however, I'd recommend starting with the code Desktop+ used as a base instead of trying to pull it out from here. That would be the DXGI Desktop Duplication Sample and the Win32 Capture Sample. They also have more lenient licenses, in case you feel the GPL is tying you down in some way here.
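For orientation, the core of the Desktop Duplication sample boils down to roughly this (a minimal sketch with error handling omitted; the function names are just for illustration):

```cpp
#include <d3d11.h>
#include <dxgi1_2.h>

// Set up duplication of the first output of the device's adapter.
// A real app enumerates outputs instead of hardcoding index 0.
IDXGIOutputDuplication* CreateDuplication(ID3D11Device* device)
{
    IDXGIDevice* dxgiDevice = nullptr;
    device->QueryInterface(IID_PPV_ARGS(&dxgiDevice));

    IDXGIAdapter* adapter = nullptr;
    dxgiDevice->GetAdapter(&adapter);
    dxgiDevice->Release();

    IDXGIOutput* output = nullptr;
    adapter->EnumOutputs(0, &output);
    adapter->Release();

    IDXGIOutput1* output1 = nullptr;
    output->QueryInterface(IID_PPV_ARGS(&output1));
    output->Release();

    IDXGIOutputDuplication* duplication = nullptr;
    output1->DuplicateOutput(device, &duplication);
    output1->Release();
    return duplication;
}

// Per frame: AcquireNextFrame() hands you an ID3D11Texture2D of the desktop.
void CaptureOneFrame(IDXGIOutputDuplication* duplication)
{
    DXGI_OUTDUPL_FRAME_INFO info = {};
    IDXGIResource* resource = nullptr;
    if (SUCCEEDED(duplication->AcquireNextFrame(500, &info, &resource)))
    {
        ID3D11Texture2D* frame = nullptr;
        resource->QueryInterface(IID_PPV_ARGS(&frame));
        // ...copy the texture or hand it to the VR compositor here...
        frame->Release();
        resource->Release();
        duplication->ReleaseFrame();
    }
}
```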
Regarding Cliffhouse... I've only seen a bit of that, never used a WMR headset myself. I was under the impression you couldn't disable it, however. Wouldn't that just end up stacking another app on top then? No idea how much impact Cliffhouse has though.
In WMR, the "cliff house" (which can be substituted with any 3D model) is, I guess, a kind of staging area. When you run a real app, it does a full-screen transition effect that removes the "cliff house" scene entirely. I think you can turn your headset on, run the better (desktop emulation/pass-through) app, and never see the "cliff house", assuming it doesn't force you to do its "look left", "look right", "look down" calibration every single time you use your headset. Having to submit to that prompt annoys me to no end.
If I were to try to patch it (I generally hate Git/GitHub for patching), it would only be if there was a clean way to write a cross-API layer and if only a small amount of OpenXR was needed to do it. Otherwise I'd be more inclined to just repackage the capture/relay code into a new app or one of my existing projects. I had no idea where to begin with capture/relay and wasn't sure if Microsoft would be advocating through MSDN for developers to do that, since they might have their own plans. I wonder if you learned things along the way that aren't in those documents.
I really appreciate your reply! Edit/afterthought: I don't know about "overlays", but my thinking is to just have a regular WMR app that would somehow yield to any other WMR app (and obviously any app it runs), provided that can be implemented.
> I wonder if you learned things along the way that aren't in those documents.
Hm... hard to say. I've mostly relied on the sample code and went from there, as it's a pretty decent foundation. In OpenVR's case, where you pass a 2D texture to represent an overlay, just passing what the samples rendered to already gets you a working desktop mirror in VR. But here are a few things I remember, which may or may not be relevant to you:
You probably want to be using DirectX 11 for your app. Both capture APIs return ID3D11Texture2Ds, and the samples use it as well. In my case this just lined up with the format native to the SteamVR compositor on Windows, so that was no problem (thanks to this, Desktop+ uses an undocumented texture-sharing trick to set the same texture on multiple overlays without extra copies).
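To illustrate the "just pass the texture" part, something like this is all the overlay side takes (a minimal sketch, not Desktop+'s actual code; assumes OpenVR is already initialized and error handling is omitted):

```cpp
#include <d3d11.h>
#include <openvr.h>

// Hand a captured D3D11 texture to an OpenVR overlay as a desktop mirror.
void ShowDesktopMirror(ID3D11Texture2D* capturedTexture)
{
    vr::VROverlayHandle_t overlay = vr::k_ulOverlayHandleInvalid;
    vr::VROverlay()->CreateOverlay("example.desktopmirror", "Desktop Mirror", &overlay);
    vr::VROverlay()->SetOverlayWidthInMeters(overlay, 2.0f);
    vr::VROverlay()->ShowOverlay(overlay);

    // The overlay API takes the D3D11 texture directly; no extra copy needed.
    vr::Texture_t tex = { capturedTexture, vr::TextureType_DirectX, vr::ColorSpace_Gamma };
    vr::VROverlay()->SetOverlayTexture(overlay, &tex);
}
```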
I think that's about it? Developments around Desktop+ these days don't really involve the capture code itself much, but mostly everything else instead. Hope this helps in some way.
Regarding actually building the OpenXR app however... I have zero experience there, so you're on your own. Same applied to me with OpenVR and DX11 when I started this project though, so I'm sure you can do it.
I'm currently working with OpenXR; it's just the way VR seems to be heading. The WMR API has been replaced by it, and SteamVR is implemented in terms of it. But I don't think there's a central authority for it like there is for graphics drivers. On D3D11, I think there is some interop between it and D3D12, and now OpenGL and Vulkan. I was recently trying to use D3D9On12 to do OpenXR interop; it didn't work out (so far), but only because it doesn't have good performance for my app. I then tried to share surfaces between D3D9 and D3D12, and that turned out to be a fool's errand, even though all the APIs used HANDLE and it totally looked like it should work. I think 11 and 12 are more compatible, or ought to be. Maybe even 10. I think 10 is easier to use.
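For what it's worth, the D3D11-to-D3D12 direction at least has an explicit shared-handle path. A rough sketch of what I mean (untested in this exact context; error handling omitted, sizes hardcoded for illustration):

```cpp
#include <d3d11_1.h>
#include <d3d12.h>
#include <dxgi1_2.h>

// Create a shareable texture in D3D11 and open it as an ID3D12Resource.
// Assumes `device11` and `device12` live on the same adapter.
ID3D12Resource* ShareIntoD3D12(ID3D11Device* device11, ID3D12Device* device12)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = 1920; desc.Height = 1080;
    desc.MipLevels = 1; desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    // NT-handle sharing requires the keyed-mutex flag on the D3D11 side.
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_NTHANDLE | D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;

    ID3D11Texture2D* tex11 = nullptr;
    device11->CreateTexture2D(&desc, nullptr, &tex11);

    IDXGIResource1* dxgiRes = nullptr;
    tex11->QueryInterface(IID_PPV_ARGS(&dxgiRes));

    HANDLE shared = nullptr;
    dxgiRes->CreateSharedHandle(nullptr,
        DXGI_SHARED_RESOURCE_READ | DXGI_SHARED_RESOURCE_WRITE, nullptr, &shared);
    dxgiRes->Release();

    // D3D12 opens the same HANDLE as an ID3D12Resource. Real use would also
    // acquire the IDXGIKeyedMutex on each side before touching the texture.
    ID3D12Resource* tex12 = nullptr;
    device12->OpenSharedHandle(shared, IID_PPV_ARGS(&tex12));
    CloseHandle(shared);
    return tex12;
}
```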
One thing I wonder about Graphics Capture is whether it depends on an existing "desktop" that it copies from (i.e. an adapter connected to a monitor), or whether the system compositor can actually go virtual/headless and reroute its output to a capture client that is really a graphical application. (I have a feeling it wants a real desktop, if only to not end up in a scenario where the end user is using their desktop and doesn't know it. Although that's possible anyway if you just turn off a monitor.)
The other side of that is SendInput. I wonder if there's any scenario where keyboard/mouse input just proceeds as normal, or if this kind of app must "capture" the mouse and intercept all keys. Other than this, I really appreciate this reply.
I had to go and check to be really sure, but yes, Graphics Capture needs a connected desktop to output anything, even for window captures. Headless dongles are a low-cost way to get an additional desktop without connecting an actual monitor, and they work seamlessly in my experience. Virtual display drivers should work as well in theory, but I haven't tried any, so I don't know which ones actually work.
I'm not sure if I understand the second question. SendInput injects simulated input events into the system's input queue. It doesn't stop real inputs from happening, so you have to be careful about the current state. Real input doesn't interrupt anything you send in one batch, though. Desktop+ doesn't particularly care about real input; it only tries to avoid sending weird inputs that may glitch out other applications. You don't want to send a key-down event when the key is already down, or release things twice.
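For example, a minimal state-aware sketch (the helper name is just for illustration):

```cpp
#include <windows.h>

// Tap a key via SendInput, but only if it isn't already physically held.
void TapKeyIfUp(WORD vkCode)
{
    // High bit of GetAsyncKeyState() is set while the key is down.
    if (GetAsyncKeyState(vkCode) & 0x8000)
        return; // already down: another key-down could glitch other apps

    INPUT inputs[2] = {};
    inputs[0].type = INPUT_KEYBOARD;
    inputs[0].ki.wVk = vkCode;              // key-down
    inputs[1].type = INPUT_KEYBOARD;
    inputs[1].ki.wVk = vkCode;
    inputs[1].ki.dwFlags = KEYEVENTF_KEYUP; // matching key-up

    // One batch: real input won't interleave between these two events.
    SendInput(2, inputs, sizeof(INPUT));
}
```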
> I'm not sure if I understand the second question.
For the record, with something like PSVR on a Windows machine (kind of a hobbyist thing, since Sony doesn't implement this), it works in "clone" mode, so the input is really going to the cloned display, but you can still see it on your HMD. Whereas if the "desktop" you're currently on is showing a more game-style app for an HMD, I wonder how Windows 10 actually interacts with this other desktop. I assume even today there's a system for moving a mouse across multi-monitor setups, even though Windows 10 doesn't seem to have the old desktop-stretched-over-monitors system. My question/quandary is whether that's the solution, and if not, which desktop owns/captures the mouse. If the game-like system owns/captures the mouse, then it must fully simulate a user on the remote desktop; if not, it can just show what is happening on the desktop. That's the difference, assuming a 1-to-1 representation of the desktop and not something more exotic. No need to pen a reply to this post or the original :) I was just thinking aloud!
Matthieu Bucchianeri (https://github.com/mbucchia, author of OpenXR-Toolkit and others) is working on an OpenXR API layer to allow overlay apps to use the OpenVR API through OpenXR: https://github.com/mbucchia/OverXR/wiki. As far as I understand, this should enable DesktopPlus to run in an OpenXR environment. It might be interesting to get in touch and exchange ideas and information...
Didn't know about that one yet; neat. I'll certainly keep an eye on it and maybe jump on once there's something to test. There are a lot of quirks in the overlay API, and I abuse them here and there, so compatibility for some of them would probably be useful. That's more for when basic things work, however. I don't really have the time or expertise right now to help implement things from scratch there, especially if the overall architecture isn't decided on yet.
Following! After spending many (enjoyable) hours in Desktop+ customizing my Elite Dangerous cockpit to include a myriad of Elite's 3rd party tools overlaid "just right" on top of in-ship panels, I'd love to be able to use the same setup directly between Elite and Oculus without going through SteamVR.
It looks like this uses SteamVR. I might try to switch it to OpenXR, or use it as a reference for implementing desktop capture for my own use, if you don't mind that. I'm used to using the PSVR with Windows. It has a great system that just passes the regular display through its theater mode; it's way better than anything I've seen since getting a WMR device (HP Reverb G2), and I'm finding the "cliff house" thing (if you've seen it) maddening. I'd really like to bypass as much of it as possible.
When I'm using VR apps, I'm just switching between my desktop and VR app sessions; I don't understand why that can't be transparent like it is with PSVR. That way it's not disorienting. I find it hard to believe anyone actually uses that cliff house. From what I can see, it's just a disorienting thing meant to fill the background in the headset. On PlayStation you just see the regular desktop. I don't understand what MS is thinking, and why there are no alternatives.