Closed: johguenther closed this issue 1 year ago
From what I see in the spec, anariRenderFrame doesn't require previous render operations to have completed. It is unclear what happens on wait/map when it has been called multiple times: does it wait for and return the most recent render (implicitly discarding the previous frame), or does it return them in order? This may need to be device/frame-specific behavior that can be queried or configured (such as the number of internal back buffers).
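For concreteness, a minimal sketch of the ambiguous pattern, assuming d is a valid ANARIDevice and frame is a fully configured Frame (hypothetical names):

```c
anariRenderFrame(d, frame); // first render submitted
anariRenderFrame(d, frame); // second render submitted; the first need not have completed

anariFrameReady(d, frame, ANARI_WAIT); // blocks -- but until which render is done?

uint32_t width, height;
ANARIDataType pixelType;
const void *color =
    anariMapFrame(d, frame, "channel.color", &width, &height, &pixelType);
// Does this hold the result of the first submission, the second, or is it
// implementation-defined? The spec does not currently say.
anariUnmapFrame(d, frame, "channel.color");
```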
In the case where rendering happens directly to a screen, an interop extension could somehow provide a special frame object representing that and operate in a fire-and-forget fashion (roughly how OpenGL works). Or it could provide a swapchain object that hands out the next frame object on request (Vulkan/DX).
One can also always manually create multiple frame objects and cycle through them.
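A rough sketch of that manual cycling, double buffering with two Frame objects. The device d and the renderer, camera, and world objects are assumed to exist already; appIsRunning() and present() are hypothetical application helpers:

```c
// Minimal double-buffering sketch: two Frame objects are cycled so that one can
// be rendering while the other is consumed.
ANARIFrame frames[2];
for (int i = 0; i < 2; ++i) {
  frames[i] = anariNewFrame(d);
  uint32_t size[2] = {1024, 768};
  ANARIDataType colorFormat = ANARI_UFIXED8_RGBA_SRGB;
  anariSetParameter(d, frames[i], "size", ANARI_UINT32_VEC2, size);
  anariSetParameter(d, frames[i], "channel.color", ANARI_DATA_TYPE, &colorFormat);
  anariSetParameter(d, frames[i], "renderer", ANARI_RENDERER, &renderer);
  anariSetParameter(d, frames[i], "camera", ANARI_CAMERA, &camera);
  anariSetParameter(d, frames[i], "world", ANARI_WORLD, &world);
  anariCommitParameters(d, frames[i]);
}

int current = 0;
anariRenderFrame(d, frames[current]); // prime the pipeline
while (appIsRunning()) {              // appIsRunning(): hypothetical loop condition
  int next = 1 - current;
  anariRenderFrame(d, frames[next]);               // submit the next Frame ...
  anariFrameReady(d, frames[current], ANARI_WAIT); // ... while waiting on the current one
  present(d, frames[current]);                     // hypothetical: map, display, unmap
  current = next;
}
```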
> This may need to be device/frame-specific behavior that can be queried or configured (such as the number of internal back buffers)
I think leaving it unspecified for now is fine, and we should get some extension prototypes built for this -- designing something without any measured data here wouldn't be prudent, as performance is the entire point. It's worth noting that performance isn't "bad" per se with the status quo, but this is trying to guarantee (at some level) "ideal" performance.
> In the case where rendering happens directly to a screen an interop extension...
There are a few design options here that we should try: display interrupts, explicit event synchronization, multi-frame object linking (i.e. explicit pipelines), more prescriptive semantics for the existing API, etc. Lots of ideas to explore.
In terms of ideas I think we should try first -- I'd like to see more graphics API interop (GL, Vulkan) and look into better synchronization there. Then we can see what our render-work scheduling gaps look like given the constraints of the API used to actually display frames.
> In the case where rendering happens directly to a screen, an interop extension could somehow provide a special frame object representing that and operate in a fire-and-forget fashion (roughly how OpenGL works). Or it could provide a swapchain object that hands out the next frame object on request (Vulkan/DX).
Do we have any back ends that are ready to try these ideas?
I'm not worried about FrameBuffer interops; this will likely work somehow. It is true that

> anariRenderFrame doesn't require previous render operations to have completed

i.e., multiple (async) Frames can indeed be in flight simultaneously, but the main issue is that we cannot change the objects while a frame is being rendered, because (Sec. 3.6):
> Calling anariCommitParameters while an object is participating in a rendering operation is undefined behavior.
What is the point in queuing up renderFrames if, e.g., the camera cannot be updated in between?
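In code, the pattern the current wording seems to forbid looks roughly like this (hypothetical names; the camera is referenced by the in-flight frame):

```c
anariRenderFrame(d, frame); // frame 1 in flight; the camera participates in it

float position[3] = {0.f, 0.f, 5.f};
anariSetParameter(d, camera, "position", ANARI_FLOAT32_VEC3, position);
anariCommitParameters(d, camera); // undefined behavior per Sec. 3.6 while frame 1 renders

anariRenderFrame(d, frame); // frame 2 queued -- but there was no legal way to move the camera
```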
> What is the point in queuing up renderFrames if, e.g., the camera cannot be updated in between?
Convergence rate when frames are being progressively refined. I'd imagine that renderers which don't refine could just do a no-op after the first frame, so there's no reason for the app to need to worry about whether the frame is progressively refined or single-shot.
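As an illustration of that convergence argument, repeatedly submitting an unchanged Frame is how an application would drive progressive refinement; a non-refining renderer can treat the repeats as no-ops (sketch, same hypothetical d/frame as above):

```c
// Keep refining the same, unchanged Frame; a renderer that does not
// progressively refine may simply return the same image again.
for (int pass = 0; pass < 16; ++pass)
  anariRenderFrame(d, frame);
anariFrameReady(d, frame, ANARI_WAIT);
```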
> the main issue is that we cannot change the objects while a frame is being rendered, because (Sec. 3.6)
I actually think the statement "Calling anariCommitParameters while an object is participating in a rendering operation is undefined behavior" isn't required anymore -- that sounds like a legacy holdover from really old commit semantics, when commits meant more than just "use the parameters in the next frame". Is there a reason we need this?
> I'm not worried about FrameBuffer interops
My point was more about actionable items for what code gets written next. I don't think frame interop and throughput discussions are entirely isolated concepts from an implementation perspective, so catching up on the interop side will give us a better idea of what constraints we have with GL/Vulkan/CUDA/etc.
> Calling anariCommitParameters while an object is participating in a rendering operation is undefined behavior.
I also missed that one, but it seems wrong to me. Similar to how mapping arrays works, this should at worst block and wait for any dependent async operation to finish.
Per the 5/31/23 WG call, we want to remove the restrictions currently put on applications to avoid using anariMapArray() and anariCommitParameters() on objects currently participating in a render operation.
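A sketch of the usage that removing those restrictions is meant to allow, where commit/map may at worst block on the dependent render rather than being undefined behavior (hypothetical names; positions is an ANARIArray referenced by geometry in the world):

```c
anariRenderFrame(d, frame); // frame N in flight

// Update the camera for frame N+1; with the relaxed rules this may block until
// frame N no longer needs the old values, but it is no longer undefined behavior.
anariSetParameter(d, camera, "position", ANARI_FLOAT32_VEC3, newPosition);
anariCommitParameters(d, camera);

// Likewise, remapping an array used by frame N is allowed; the map call may wait
// for the dependent render before handing back writable storage.
float *vertices = (float *)anariMapArray(d, positions);
// ... write updated vertex data ...
anariUnmapArray(d, positions);

anariRenderFrame(d, frame); // frame N+1 picks up the new camera and geometry
```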
This issue serves as a discussion point on whether ANARI aims to support high-throughput rendering and, if yes, whether we need additional APIs or specification text. One example of high-throughput rendering is real-time rendering on GPUs: to achieve that, multiple frames must be in flight at any given time, i.e., multiple renderFrame calls with the same Frame object, and double/triple buffering of the framebuffer and associated scene data (a pipeline) is needed.