robclark opened 8 years ago
This gets into some topics I had hoped to postpone, but I do have some thoughts. Allocating the surfaces is just the first step towards solving the set of problems I listed in the XDC presentation. After that, you need to figure out how to use them. Part of that is beyond the scope of an allocation API that doesn't do any sort of rendering on its own, but part of it relies on the allocation API to be the arbiter of certain properties. Ultimately, I think the below will be split into multiple separate issues, but since they're all relevant to your questions, listing them here for now:
Coming back around to the broader question, given the above, my view is that GBM could be implemented as a wrapper on top of the allocator library. However, I don't see current GBM sitting side-by-side with the allocator library, and I don't view the current GBM<->EGL paths as correct solutions. Even with a GBM implementation sitting on top of the allocator API defined here, there wouldn't be sufficient information available to implement an EGL window surface on top of a GBM surface, nor to import a foreign allocator surface into GBM and begin displaying it, for the reasons outlined above in the state transitions bullet.
This does end up sounding more like a "buffer manager" than simply an allocator. Although possibly the only thing really needed that isn't already covered elsewhere (gbm, fence fds and the related EGL extensions, etc.) is surface state transitions.. I guess we have half of that in the form of `usage_t`. We just need an API to do something like `int buffer_transition_to(alloc_bo_t *bo, alloc_dev_t *dev, usage_t *new_state)`.. plus maybe some API to get/set fence fds?
Not sure if you were implying this, but I don't think the allocator library should actually perform the transitions itself. That requires a graphics command buffer in most cases, and I don't want the allocator to have to get into the business of setting one of those up and handling all the synchronization with other command buffers that entails. It should just describe the transitions in a format other APIs can consume. For example, Vulkan already has extensive infrastructure for doing state transitions. The allocator could just export some data that can be passed into a Vulkan pipeline barrier as an extension.
I was assuming that the allocator backend was just a shim that calls into the driver to do whatever. But yeah, I guess if that was exposed as a "liballoc" API then we'd need some sort of context object that maps to an EGLContext/etc, and that probably isn't a direction we want to go..
Originally was thinking this way because something like a v4l camera/decoder/etc doesn't have some userspace API.. but I guess practically it is only the gpu (gl/vk/cl) drivers that would need state transitions. So I guess if we only care about this when the buffer transitions between a gpu driver and something else (other gpu driver, or something that isn't a gpu driver) then handling it within the gpu api makes sense.
Yes, currently we manage to get away with only the GPU APIs performing transitions. That means compositors tend to require a GPU API if they want to re-purpose buffers behind the client's back (e.g., take a buffer intended for scanout and start using it as a video encode source), but that hasn't been an onerous requirement for anyone thus far. For us, the GPU is the only engine that can perform such a transition fast enough to be useful anyway.
I'd be interested to hear whether this is a problematic assumption for other vendors.
Agree that we want to keep layout transitions out of the liballoc API.
A more generic way to word the decision to shunt this elsewhere in the stack is to say that the native API of the buffer producer should provide the means to do the layout transitions. If the producer is advanced enough to require layout transitions, it should provide a way to do them. Currently this matches only GPUs, but it may extend to video DSPs or whatever in the future.
Possibly this new thing just ends up sitting on the side as a separate API which only deals with allocation, with some small extension to gbm to get from an `alloc_bo_t` to a `struct gbm_bo`? You could also maybe just do that via importing the dma-buf fd into gbm, but for buffers allocated via the gl/vk driver's "liballoc" backend it would be convenient to be able to recover the original `pipe_resource` or equivalent. The other possibility is a new set of EGL extensions, plus maybe some optional (ie. not supported by all backends) features like surfaces, so this new thing replaces gbm. Although there are existing users who use gbm to do gl apps on "bare metal", so this would mean that mesa has to support both gbm and the new thing as potential args to `eglGetDevice()`..