axodox opened this issue 3 years ago
It sounds like there's a fence wait or signal that's not being satisfied somewhere. Are you able to share a way for us to reproduce this?
Yes, I am thinking about it. I could obviously share it as an Unreal Engine project, but unless you work with that it can take some time to set up. I could also try to create a separate app and check it there, but it may turn out that without the other Unreal Engine machinery it behaves differently.
Well, at least now I have it working by sharing the texture from DX12 to DX11 and doing the copy there. My first expectation would be that D3D11On12 could be faster for such a case; I would definitely be interested in comparing the performance, should D3D11On12 also work.
I wouldn't expect a huge difference between sharing a surface across two different APIs vs. unwrapping via the 11on12 mapping layer. In both cases there's fundamentally going to be some synchronization, and then work submitted that references the same underlying GPU memory.
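For anyone following along, the cross-API route being discussed boils down to roughly the following (a minimal sketch, not the code from the project; the function name and the preexisting device/resource parameters are assumptions): export the D3D12 texture as an NT handle and open it on the D3D11 device.

```cpp
#include <d3d11_1.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical sketch: share a D3D12 texture (created on a heap with
// D3D12_HEAP_FLAG_SHARED) with a separate D3D11 device.
ComPtr<ID3D11Texture2D> OpenSharedTexture(
    ID3D12Device* d3d12Device,
    ID3D12Resource* d3d12Texture,
    ID3D11Device1* d3d11Device)
{
  // Export the D3D12 resource as an NT handle...
  HANDLE sharedHandle = nullptr;
  d3d12Device->CreateSharedHandle(
      d3d12Texture, nullptr, GENERIC_ALL, nullptr, &sharedHandle);

  // ...and open it on the D3D11 device.
  ComPtr<ID3D11Texture2D> d3d11Texture;
  d3d11Device->OpenSharedResource1(
      sharedHandle, IID_PPV_ARGS(&d3d11Texture));
  CloseHandle(sharedHandle);
  return d3d11Texture;
}
```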
Here is the code I am using in Unreal: https://gist.github.com/axodox/d906628a8df4a35a51484cbf8593c119
As this is my first time with DX12, I expect some of the usage is bad, but the issue happened even without using or wrapping any resources. Just creating the D3D11On12 device and handing it to the pool triggered it.
So yes, even if I comment things out like this, it still stops returning frames after the frame pool is full: https://gist.github.com/axodox/03bd9b6a5dd2bb20a752173aa27c5ed7
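For context, the minimal setup that reproduces the problem amounts to something like this (a hypothetical C++/WinRT sketch rather than the Unreal code in the gist; the function name and parameters are illustrative):

```cpp
#include <d3d11on12.h>
#include <winrt/Windows.Graphics.h>
#include <winrt/Windows.Graphics.Capture.h>
#include <winrt/Windows.Graphics.DirectX.h>
#include <winrt/Windows.Graphics.DirectX.Direct3D11.h>
#include <windows.graphics.directx.direct3d11.interop.h>
#include <wrl/client.h>
using namespace winrt::Windows::Graphics::Capture;
using namespace winrt::Windows::Graphics::DirectX;
using Microsoft::WRL::ComPtr;

// Create a D3D11On12 device from an existing D3D12 device/queue and hand it
// to the frame pool. The D3D12 device and queue are assumed to come from the engine.
Direct3D11CaptureFramePool CreatePoolOn11On12(
    ID3D12Device* d3d12Device, ID3D12CommandQueue* d3d12Queue,
    winrt::Windows::Graphics::SizeInt32 size)
{
  // Wrap the D3D12 device/queue with the 11on12 layer.
  ComPtr<ID3D11Device> d3d11Device;
  IUnknown* queues[] = { d3d12Queue };
  D3D11On12CreateDevice(
      d3d12Device, D3D11_CREATE_DEVICE_BGRA_SUPPORT, nullptr, 0,
      queues, 1, 0, &d3d11Device, nullptr, nullptr);

  // Convert the D3D11 device into the WinRT IDirect3DDevice the pool expects.
  ComPtr<IDXGIDevice> dxgiDevice;
  d3d11Device.As(&dxgiDevice);
  winrt::com_ptr<IInspectable> inspectable;
  CreateDirect3D11DeviceFromDXGIDevice(dxgiDevice.Get(), inspectable.put());
  auto device = inspectable.as<Direct3D11::IDirect3DDevice>();

  // With a plain D3D11 device this keeps delivering frames; with 11on12 the
  // pool stops after `numberOfBuffers` frames, which is the issue reported here.
  return Direct3D11CaptureFramePool::CreateFreeThreaded(
      device, DirectXPixelFormat::B8G8R8A8UIntNormalized, 2, size);
}
```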
> I wouldn't expect a huge difference between sharing a surface across two different APIs vs. unwrapping via the 11on12 mapping layer. In both cases there's fundamentally going to be some synchronization, and then work submitted that references the same underlying GPU memory.
I expected some difference, on the basis that otherwise why would D3D11On12 exist, since we could just share the textures in question the old way. But of course there may be other angles to this.
Yeah, there's some efficiency to be gained by using the same device and queue, compared to using a shared resource, whose design was really for cross-process usage originally. As a specific example, explicit synchronization isn't really needed, because work on the same queue is guaranteed to be serialized.
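To make that concrete: the shared-resource path needs an explicit fence handshake along these lines, which the same-queue 11on12 path can skip. This is a hedged sketch only; all the device, queue, and context parameters are assumed to already exist in the application.

```cpp
#include <d3d11_4.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Signal from the D3D11 side after the copy, and make the D3D12 queue wait
// before it consumes the shared texture.
void SyncSharedCopy(
    ID3D12Device* d3d12Device, ID3D12CommandQueue* d3d12Queue,
    ID3D11Device5* d3d11Device, ID3D11DeviceContext4* d3d11Context,
    UINT64 fenceValue)
{
  // Create a shareable fence on the D3D12 device...
  ComPtr<ID3D12Fence> d3d12Fence;
  d3d12Device->CreateFence(
      0, D3D12_FENCE_FLAG_SHARED, IID_PPV_ARGS(&d3d12Fence));

  // ...export it and open it on the D3D11 device.
  HANDLE fenceHandle = nullptr;
  d3d12Device->CreateSharedHandle(
      d3d12Fence.Get(), nullptr, GENERIC_ALL, nullptr, &fenceHandle);
  ComPtr<ID3D11Fence> d3d11Fence;
  d3d11Device->OpenSharedFence(fenceHandle, IID_PPV_ARGS(&d3d11Fence));
  CloseHandle(fenceHandle);

  // D3D11 signals once its copy into the shared texture is submitted...
  d3d11Context->Signal(d3d11Fence.Get(), fenceValue);
  // ...and the D3D12 queue waits for that value before using the texture.
  d3d12Queue->Wait(d3d12Fence.Get(), fenceValue);
}
```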
Hi there - I'm having this exact issue. As @axodox stated, simply using the combination of D3D11on12 and Windows.Graphics.Capture causes the Direct3D11CaptureFramePool to stop delivering new frames after delivering exactly as many buffers as specified by the numberOfBuffers parameter to Direct3D11CaptureFramePool::CreateFreeThreaded. (I've also verified the same is true using Create rather than CreateFreeThreaded.)
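For reference, the frame handling is essentially the standard pattern below (a simplified C++/WinRT sketch of what the Rust code does, not the crate's actual code). Each frame is closed, yet with an 11on12 device the event stops firing once numberOfBuffers frames have been delivered.

```cpp
#include <winrt/Windows.Graphics.Capture.h>
using namespace winrt::Windows::Graphics::Capture;

void SubscribeToFrames(Direct3D11CaptureFramePool const& framePool)
{
  framePool.FrameArrived([](Direct3D11CaptureFramePool const& pool, auto&&)
  {
    if (auto frame = pool.TryGetNextFrame())
    {
      auto surface = frame.Surface(); // process or copy the captured surface
      frame.Close();                  // release the slot back to the pool
    }
  });
}
```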
I don't encounter this issue if I create a plain D3D11 device rather than a D3D11on12 device, with no other changes.
This is within a Rust crate I'm developing for screen capture. The relevant code is here: https://github.com/AugmendTech/CrabGrab/blob/109a6e1fb2eb1dba0e760eb5fb616f5f2a07f8ae/src/platform/windows/capture_stream.rs#L263 for the creation of Direct3D11CaptureFramePool, and here: https://github.com/AugmendTech/CrabGrab/blob/109a6e1fb2eb1dba0e760eb5fb616f5f2a07f8ae/src/feature/wgpu/mod.rs#L67 for the creation of the ID3D11On12Device.
I have implemented an Unreal Engine 4 module which allows one to use desktop windows as textures with minimal overhead and latency. It works fine on DirectX 11, but I need to make it work for DirectX 12 too. For this I am using the D3D11On12 API to get an ID3D11Device, which I can provide to WinRT's Direct3D11CaptureFramePool on creation. Long story short: I got the D3D12 texture updated, but it only works until the capture frame pool fills up; the frames are not released back to the pool and the FrameArrived event stops firing.
To isolate the issue I removed all rendering code besides creating the D3D11On12 device and the pool, and it still does the same thing. It seems that when using a D3D11On12 device the frame pool fails. I am not experienced with DirectX 12 yet; I have tried issuing flush commands to the command queue and so on, but the issue persists. I am assuming the D3D12 command queue I extracted from Unreal Engine gets executed, since otherwise the D3D11 => D3D12 texture copies would never be performed, but they are. I am not sure how the capture frame pool works internally, but in theory it should be able to reuse frames after I close them.
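For reference, the copy path I described amounts to something like the following (a simplified sketch, not the actual Unreal module code; the function and parameter names are illustrative): wrap the D3D12 target texture on the 11on12 device, copy the captured D3D11 surface into it, and flush.

```cpp
#include <d3d11on12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CopyCaptureToD3D12(
    ID3D11On12Device* d3d11On12Device, ID3D11DeviceContext* d3d11Context,
    ID3D12Resource* d3d12TargetTexture, ID3D11Texture2D* captureTexture)
{
  // Wrap the D3D12 target so the D3D11 context can write to it.
  D3D11_RESOURCE_FLAGS flags = {};
  ComPtr<ID3D11Texture2D> wrappedTarget;
  d3d11On12Device->CreateWrappedResource(
      d3d12TargetTexture, &flags,
      D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_COMMON,
      IID_PPV_ARGS(&wrappedTarget));

  // Acquire, copy, release, then flush so 11on12 submits the work
  // to the underlying D3D12 command queue.
  ID3D11Resource* wrapped[] = { wrappedTarget.Get() };
  d3d11On12Device->AcquireWrappedResources(wrapped, 1);
  d3d11Context->CopyResource(wrappedTarget.Get(), captureTexture);
  d3d11On12Device->ReleaseWrappedResources(wrapped, 1);
  d3d11Context->Flush();
}
```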
Am I doing something wrong here? Or is this a compatibility issue?
I guess my other option would be to create a D3D11 device and use shared resources with the D3D12 one, but that will have more overhead.