Closed: azeno closed this issue 1 year ago
As an alternative, we could also add a Warn node to the ReadBackBuffer when it blocks. However, I still believe the approach taken in NDI is better: it uses a dynamic queue structure internally to set up the right amount of staging textures.
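To illustrate that idea, here is a minimal sketch of such a dynamic staging-texture queue in C#. It is not the actual NDI implementation; the createStaging, copyToStaging and tryMapStaging delegates stand in for the real GPU calls and are assumptions for the sake of the example:

```csharp
using System;
using System.Collections.Generic;

// Sketch of a dynamic queue of staging textures for non-blocking readback.
// TStaging stands for whatever staging resource the graphics API uses.
sealed class DynamicReadbackQueue<TStaging>
{
    readonly Func<TStaging> createStaging;          // allocate one more staging texture
    readonly Action<TStaging> copyToStaging;        // GPU copy: source texture -> staging
    readonly Func<TStaging, byte[]> tryMapStaging;  // returns the data if ready, null otherwise
    readonly Queue<(TStaging Staging, int Frame)> inFlight = new();
    readonly Stack<TStaging> free = new();
    int frameCounter;

    public DynamicReadbackQueue(Func<TStaging> create, Action<TStaging> copy, Func<TStaging, byte[]> tryMap)
    {
        createStaging = create;
        copyToStaging = copy;
        tryMapStaging = tryMap;
    }

    // Called once per frame: issue a copy into a free staging texture and try to read
    // back the oldest pending one without blocking. Returns the data plus its age in
    // frames, or null if nothing is ready yet.
    public (byte[] Data, int Age)? Update()
    {
        var staging = free.Count > 0 ? free.Pop() : createStaging(); // grow the pool on demand
        copyToStaging(staging);
        inFlight.Enqueue((staging, frameCounter));

        var oldest = inFlight.Peek();
        var data = tryMapStaging(oldest.Staging);
        if (data == null)
        {
            frameCounter++;
            return null; // not ready yet: the queue simply grows instead of stalling
        }

        inFlight.Dequeue();
        free.Push(oldest.Staging);
        var age = frameCounter - oldest.Frame;
        frameCounter++;
        return (data, age);
    }
}
```

The queue only grows when a readback is not ready in time, so the effective latency follows the GPU's actual needs, and the returned age tells the consumer which frame the data belongs to.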
The buffer readback is just a copy of Stride's image readback and should be PRed back to the main repo: ImageReadback.cs
If it can work adaptively (or does it already, and the delay setting is just a maximum delay?), that would of course be much better, and it could also be merged back to Stride.
If the readback is adaptive, it is just important to know how many frames old the data is, so we know which frame it belongs to.
Just looked it up, and indeed it uses a maximum (which is also set too low, btw): https://github.com/vvvv/VL.StandardLibs/blob/main/VL.Stride.Runtime/src/Video/TextureToVideoStream.cs#L70
But since it's a maximum, we could set it higher (like 16) without having to fear that the default causes the data to be too outdated.
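A rough sketch of why a higher cap is harmless (the names and the value 16 are illustrative, not taken from TextureToVideoStream.cs): the maximum only bounds how many copies may be in flight before the node has to block, it does not add latency by itself.

```csharp
// Illustrative only: the maximum acts as a hard cap, not as latency that is always paid.
sealed class BoundedReadbackPolicy
{
    public int MaxFramesInFlight { get; set; } = 16; // safe upper bound
    int framesInFlight;

    public void OnCopyIssued() => framesInFlight++;
    public void OnCopyCompleted() => framesInFlight--;

    // Only when the GPU has fallen this far behind does the node have to block;
    // in the typical case the latency stays at the GPU's natural one to three frames.
    public bool MustBlock => framesInFlight >= MaxFramesInFlight;
}
```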
Describe the bug
The current ReadBackBuffer node, which is used by nodes such as Pipet or TextureToImage, can cause severe frame rate issues once it starts blocking. Also note that TextureToImage is used by our IO boxes and tooltips.

Which version are you experiencing the bug with?
5.0-preview735
Is there an earlier version where this still works?
5.0-preview588
To Reproduce
See attached patch
Expected behavior
These nodes shouldn't block at all. If blocking is needed for some reason, they should expose an input pin to enable it.
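For illustration, a hedged sketch of what a non-blocking readback poll could look like, assuming Stride's CommandList.MapSubresource overload with a doNotWait flag (the pattern used by Stride's ImageReadback); exact signatures may differ between Stride versions:

```csharp
using System;
using System.Runtime.InteropServices;
using Stride.Graphics;

static class NonBlockingReadback
{
    // Tries to map a staging texture without waiting. Returns false if the GPU copy
    // has not finished yet, so the caller can poll again next frame instead of
    // stalling the pipeline.
    public static bool TryReadBack(CommandList commandList, Texture staging, byte[] destination)
    {
        var mapped = commandList.MapSubresource(staging, 0, MapMode.Read, doNotWait: true);
        if (mapped.DataBox.DataPointer == IntPtr.Zero)
            return false; // data not ready yet

        try
        {
            // For brevity this ignores row pitch; a complete implementation copies row by row.
            Marshal.Copy(mapped.DataBox.DataPointer, destination, 0, destination.Length);
            return true;
        }
        finally
        {
            commandList.UnmapSubresource(mapped);
        }
    }
}
```

If the map is not ready, the caller simply keeps the previous frame's data and polls again on the next frame instead of stalling the render thread.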
Additional context
This issue surfaced recently in a project. Initially we thought it was caused by Stride's switch from the old blt swap-chain model to the more performant flip model. But as it turns out, stalling the GPU with a blocking read seems to have a different effect on a flip-based swap chain than on a blt-based one. In any case, blocking should be avoided.
BlockingPipetCausesInstableFPS.zip