Open briansemrau opened 1 year ago
I wonder if this can be simplified to work with the RS::SurfaceData struct. I need to look more into it, but I remember reduz wanted to expose the SurfaceData to users in order to make surface construction/management a little easier. Perhaps we could expose SurfaceData and something like SurfaceDataRD to unify the API in an intuitive way.
As a workaround, the above is somewhat doable in 4.2 using Texture2DRD. The idea is to store the geometry output of the compute shader in a texture with a pixel format that allows arbitrary floats (such as R32G32B32A32Sfloat) and use a custom gdshader that reads from the texture with texelFetch, so that sampling/filtering doesn't corrupt the data.
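The workaround above could be sketched roughly like this in GDScript. This is an illustration under assumptions, not tested code: it assumes a compute pipeline elsewhere writes vertex data into the texture, and the sizes/usage bits are placeholders.

```gdscript
# Sketch of the Texture2DRD workaround (assumptions: a compute pipeline
# elsewhere writes vertex positions into this texture; sizes are placeholders).
var rd := RenderingServer.get_rendering_device()

var fmt := RDTextureFormat.new()
fmt.format = RenderingDevice.DATA_FORMAT_R32G32B32A32_SFLOAT
fmt.width = 1024   # maximum vertex count must be chosen up front
fmt.height = 1
fmt.usage_bits = RenderingDevice.TEXTURE_USAGE_STORAGE_BIT \
		| RenderingDevice.TEXTURE_USAGE_SAMPLING_BIT

var tex_rid := rd.texture_create(fmt, RDTextureView.new())

# Expose the RD texture to the scene renderer via Texture2DRD.
var tex := Texture2DRD.new()
tex.texture_rd_rid = tex_rid

# The material's gdshader then reads unfiltered in the vertex stage, e.g.:
#   vec4 v = texelFetch(geometry_tex, ivec2(VERTEX_ID, 0), 0);
```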
The problem with this is that an ArrayMesh MUST have a vertex array, and the only way to change the vertex count is to rebuild the entire mesh. So performance may degrade if we can't predict beforehand the maximum number of vertices that will be needed: even if only a fraction of them are populated in the texture, all of them will be drawn, and if the compute shader outputs more vertices than the mesh has, the extras are simply truncated, and because of the parallelism some glitching behaviour may occur.
So I want to add that, if this proposal is ever accepted, consider allowing the number of vertices and primitives drawn to be specified without a CPU readback or a prior maximum-size guess, by reading the counts from a uint buffer written by the compute shader (similar to how Graphics.DrawProceduralIndirect works in Unity).
I poked around at this for a bit in my evenings the last week and I suspect that coming up with a good API will be extremely difficult. (My WIP here: https://github.com/clayjohn/godot/tree/MeshRD)
The first issue is that we don't support indirect drawing in our normal renderer. So you need to read back vertex_count, AABB, etc anyway. We could expose a SSBO hint to use CPU-readable memory in order to make that more efficient, but it will be a pain for users.
The second problem is that compute shaders can't be used for things like blend shapes, LODs, or indexing. So this Mesh type would have to be quite limited and inefficient.
It's worth noting that Unity only recently started supporting access to mesh data from compute shaders, and the API looks pretty bare-bones, but we could do something similar: https://discussions.unity.com/t/feedback-wanted-mesh-compute-shader-access/837706. They support retrieving the buffer directly, which you can then operate on at your leisure.
That being said, before we jump into designing an API, we should figure out what the most common use-cases are and cater to those. From my online searches 90% of people that want to build meshes on the GPU are building a mesh representation of voxel or SDF data. If that's the only use-case we need to cater to, we can probably figure out a nice API.
I think a safe starting point would be something like RenderingServer.mesh_surface_update_vertex_region, but for updating the entire buffer — maybe a mesh_surface_update_vertex_buffer_rd.
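For context, a rough sketch of how the existing CPU-side API is used today, and what the suggested RD variant might look like. The `_rd` method below is hypothetical (it's the name suggested above, not an existing Godot method), and `mesh_rid` is assumed to come from an ArrayMesh's `get_rid()`:

```gdscript
# Existing API (Godot 4.x): update part of a surface's vertex buffer from the
# CPU. The byte layout must match the surface's vertex format.
var bytes := PackedFloat32Array([0.0, 1.0, 0.0]).to_byte_array()  # one position
RenderingServer.mesh_surface_update_vertex_region(mesh_rid, 0, 0, bytes)

# Hypothetical RD-aware variant, per the suggestion above: take an RD buffer
# RID directly so no CPU round trip is needed.
# RenderingServer.mesh_surface_update_vertex_buffer_rd(mesh_rid, 0, vertex_buffer_rid)
```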
Off the top of my head, some use cases:
Actually why don't we offer compute as an option for immediate geometry?
> Actually why don't we offer compute as an option for immediate geometry?
Yes, please!
To give a concrete use-case:
For trail rendering (for nodes, not individual particles), tyre tracks, and some simple simulated effects (rope and cloth) I am currently using a CompositorEffect that renders directly into the scene with a draw list, framebuffer etc, with mesh buffers being modified by compute shaders.
This works, with the main limitation that the draw shader has to be constructed from scratch and can't use most builtin shading features, so being able to integrate it better with the scene would be incredibly helpful.
I think the API and usability issues that crop up (which are understandably quite hard to solve) are also pretty tightly related to godotengine/godot#94427 and other similar features, in that this probably needs some shader code generation exposed to the user, rather than requiring the user to manually define buffer layouts etc.
Blender has quite a robust system for its internal draw library that generates bindings, includes the appropriate libraries, etc. for shaders: https://developer.blender.org/docs/features/gpu/overview/#shader-info — which may be a good point of reference.
Describe the project you are working on
A massively procedurally generated 3D game. The terrain mesh is generated using compute shaders in real time.
Describe the problem or limitation you are having in your project
Moving mesh data from GPU -> CPU -> GPU causes stuttering for larger meshes.
See Sebastian Lague's video and his brief comments working on the same thing: https://youtu.be/kIMHRQWorkE?t=1182
Describe the feature / enhancement and how it helps to overcome the problem or limitation
Similar to https://github.com/godotengine/godot-proposals/issues/6964
Add the ability to create a MeshRD resource. You should be able to define the surface data using RenderingServer RIDs instead of arrays of data.
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
How this is done currently:
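A minimal sketch of the current round trip, assuming a compute pipeline has already written vertex data into `vertex_buffer_rid` (variable names and the repacking step are illustrative):

```gdscript
# Current approach: GPU -> CPU -> GPU.
var rd := RenderingServer.get_rendering_device()

# 1. Stall and read the compute output back to the CPU.
var bytes: PackedByteArray = rd.buffer_get_data(vertex_buffer_rid)
var floats := bytes.to_float32_array()

# 2. Repack into engine arrays (e.g. a PackedVector3Array of positions).
var positions := PackedVector3Array()
for i in range(0, floats.size(), 3):
	positions.push_back(Vector3(floats[i], floats[i + 1], floats[i + 2]))

# 3. Re-upload the same data through the Mesh API.
var arrays := []
arrays.resize(Mesh.ARRAY_MAX)
arrays[Mesh.ARRAY_VERTEX] = positions
var mesh := ArrayMesh.new()
mesh.add_surface_from_arrays(Mesh.PRIMITIVE_TRIANGLES, arrays)
```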
How it could be done with MeshRD:
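A purely hypothetical sketch of the proposed MeshRD resource — every name below (the class, the method, its parameters) is an assumption for illustration, not an existing Godot API:

```gdscript
# Hypothetical MeshRD: the surface consumes RD buffer RIDs directly,
# so the compute output never leaves the GPU.
var mesh := MeshRD.new()
mesh.add_surface_from_buffers(
	Mesh.PRIMITIVE_TRIANGLES,
	vertex_buffer_rid,  # RD buffer the compute shader wrote into
	index_buffer_rid,   # optional index buffer RID
	vertex_count
)
```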
If this enhancement will not be used often, can it be worked around with a few lines of script?
No.
Is there a reason why this should be core and not an add-on in the asset library?
It requires engine renderer changes.