Open ghost opened 12 years ago
Stumbled upon this issue just recently. As I'm reading a couple of tomes about OpenGL, it occurred to me that Terasology either already uses a geometry shader to generate the world mesh or might want to. There will always be budgetary issues: even with its formidable parallelism, a GPU is a finite resource. But I doubt the CPU is the right place to generate such a mesh if the GPU can take care of it given just the center point, texture id and, I would add, the light levels.
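To make the data savings concrete, here is a minimal sketch (not Terasology's actual format — the layout and names are my own assumptions) of what a compact one-point-per-block vertex might look like, compared to a conventional per-cube mesh:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PointVertexPacker {
    // Hypothetical compact per-block vertex: center (3 floats), texture id (1 byte),
    // light level (1 byte), plus 2 bytes of padding for 4-byte alignment = 16 bytes.
    static final int POINT_VERTEX_BYTES = 3 * Float.BYTES + 4;

    static ByteBuffer packBlock(float cx, float cy, float cz, byte texId, byte light) {
        ByteBuffer buf = ByteBuffer.allocate(POINT_VERTEX_BYTES).order(ByteOrder.nativeOrder());
        buf.putFloat(cx).putFloat(cy).putFloat(cz);
        buf.put(texId).put(light).putShort((short) 0); // padding
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        // A conventional cube mesh: 24 vertices (4 per face), each with pos(3f) + normal(3f) + uv(2f).
        int cubeMeshBytes = 24 * (3 + 3 + 2) * Float.BYTES; // 768 bytes
        ByteBuffer point = packBlock(1.5f, 64.5f, -3.5f, (byte) 7, (byte) 15);
        System.out.println("cube mesh: " + cubeMeshBytes + " bytes, point vertex: " + point.remaining() + " bytes");
    }
}
```

The geometry shader would then expand each 16-byte point into the cube's visible faces on the GPU, so only the packed points ever cross the bus.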
The GPU might even take care of determining block visibility in the vertex shader, but I know next to nothing on the subject; I just suspect it might be beneficial to do it on the GPU.
One good place to test these possibilities from the outset would be issue #319. As I understand it, faraway chunks would be handled separately from the world mesh near the player. This would allow the technology mentioned here to be developed in isolation and eventually, once proven useful, it could be incorporated into the near world mesh.
For note, my past experiments with geoshaders showed that doing a lot of point creation inside them adds substantial frame rendering time. I did something similar: I sent a 3D integer texture whose values were texture indexes into an atlas and generated points based on that. It substantially reduced the data sent to the card, but it increased the drawing time over normal methods. This was on first-generation geoshader-supporting cards, however; maybe it has improved since then. Geoshaders seem better suited to refining a model than to generating vast amounts of model data.

What I found best overall was a terrain with flat sections collapsed as much as possible, plus an integer 3D texture and the atlas sent to a shader that just did a look-up into the atlas based on the 3D texture value for that block. You can even do things such as blending values across blocks quite easily (I sent blendable meshes, like grass and dirt, in a separate pass from non-blended ones like machines). It was not a very expensive shader either, just a dependent texture lookup, which, although more costly on much older cards, is less of an issue now. I used it for marching-cubes-like terrain, but it would work just as well on blocks.
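The dependent lookup described above boils down to mapping a block's texture index to a tile origin in the atlas. A minimal sketch of that mapping (assuming a hypothetical 16x16-tile atlas; the shader would do the same arithmetic per fragment):

```java
public class AtlasLookup {
    // Assumed atlas layout: 16 tiles per row, tiles indexed row-major from the origin.
    static final int TILES_PER_ROW = 16;

    // Map a block's texture index to the UV origin of its tile in the atlas,
    // mirroring what the dependent texture lookup computes on the GPU.
    static float[] tileUvOrigin(int texIndex) {
        float tileSize = 1.0f / TILES_PER_ROW;
        int col = texIndex % TILES_PER_ROW;
        int row = texIndex / TILES_PER_ROW;
        return new float[] { col * tileSize, row * tileSize };
    }

    public static void main(String[] args) {
        float[] uv = tileUvOrigin(17); // second row, second column
        System.out.println(uv[0] + ", " + uv[1]); // 0.0625, 0.0625
    }
}
```

Because the 3D texture stores only one small integer per block, the whole chunk's "geometry" fits in a few kilobytes; the cost moves to one extra texture fetch per fragment.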
By Begla's request I'll leave this here: in order to reduce the memory footprint (especially on the GPU) we might utilize geometry shaders to render cubes from points.
NOTE: I have no idea about the performance implications. I would like to test it, but I currently have no suitable machine available.
Idea:
While this approach might increase the burden on the shading units, it could significantly reduce the amount of data sent to the GPU. The worst-case improvement (for geometry data) is a 75% reduction (4 vertices -> 1 vertex); the best case is roughly an 86% reduction (7 vertices -> 1 vertex).
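The two figures follow directly from how many vertices a visible cube needs: one visible face is a 4-vertex quad, while three visible faces share 7 distinct corner vertices. Collapsing either to a single point gives:

```java
public class ReductionEstimate {
    // Percentage of vertex data saved when a block's mesh vertices
    // collapse to a single point fed to the geometry shader.
    static double reduction(int meshVertices) {
        return 100.0 * (meshVertices - 1) / meshVertices;
    }

    public static void main(String[] args) {
        // Worst case: only one face visible -> 4 quad vertices become 1 point.
        System.out.printf("worst case: %.0f%%%n", reduction(4));  // 75%
        // Best case: three faces visible -> 7 distinct vertices become 1 point.
        System.out.printf("best case: %.1f%%%n", reduction(7));   // 85.7%
    }
}
```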
EDIT: This should of course only be used for rendering the blocks.
EDIT 2: And since we are no longer rendering a mesh, we could sort the cubes by texture, which might reduce state changes.
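A minimal sketch of that batching idea (the record and method names are my own, not Terasology API): group the per-block points by texture id so each texture is bound once per frame instead of once per cube.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TextureBatcher {
    // Hypothetical per-block point: center position plus texture id.
    record BlockPoint(float x, float y, float z, int texId) {}

    // Group point vertices by texture; the renderer would then bind each
    // texture once and draw its whole batch of points in a single call.
    static Map<Integer, List<BlockPoint>> batchByTexture(List<BlockPoint> points) {
        Map<Integer, List<BlockPoint>> batches = new TreeMap<>();
        for (BlockPoint p : points) {
            batches.computeIfAbsent(p.texId(), k -> new ArrayList<>()).add(p);
        }
        return batches;
    }

    public static void main(String[] args) {
        List<BlockPoint> points = List.of(
            new BlockPoint(0, 0, 0, 3),
            new BlockPoint(1, 0, 0, 1),
            new BlockPoint(2, 0, 0, 3));
        // Two texture binds instead of three draws with interleaved textures.
        System.out.println(batchByTexture(points).keySet()); // [1, 3]
    }
}
```

With a texture atlas (as discussed earlier in the thread) the sorting becomes largely unnecessary, since every block samples the same bound texture; the two approaches are alternatives rather than complements.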