We should look into the feasibility of the Toon Shader rewrite -- it looks like we could get significant performance wins from rewriting this system. Not yet convinced by the dynamic atlasing approach, as it would require creating new textures at run-time, but we already allocate new meshes anyway.
From the technical review:
===
Even though the game uses simple toon shading for its characters, the avatar shader itself is 20k lines long and uses 12 textures as inputs. The length stems from the shader originally being authored in a shader graph; the graph was eventually abandoned and the team kept the compiled shader graph output as the main shader. This makes the shader extremely difficult to maintain, understand, and reason about.
In Decentraland, player avatars are highly customizable. To reduce rendering complexity, the current implementation merges the sub-meshes of a player avatar into a single mesh, which is then rendered using the toon shader. To support the large number of custom body parts and wearables, the shader is provided with up to 12 textures to sample from. Due to the nature of execution on GPUs, all 12 textures will always be sampled, regardless of whether anything is actually bound to them.
As this is such an important part of the game, there are multiple steps we could take to make this shader more performant and future-proof:
[ ] A first step would be to rewrite the shader in HLSL. The shading operations themselves are not overly complex, and the shader could probably be shrunk down to a few hundred lines of code, which would make it much more maintainable and would also compile much faster.
[ ] Develop a dynamic atlasing approach to reduce the number of textures sampled per pixel down to one. The shader only ever needs the content of a single one of those 12 textures for the pixel currently being processed. Therefore, if all 12 textures were merged into one, a single sampling operation would suffice, with the shader adjusting only the UV offset. A possible approach would be to create a texture atlas at run-time for each player, at the same time the avatar mesh is baked. This texture atlas could be dynamically sized based on the number of constituent sub-textures (1-12) and then compressed to reduce its size.
[ ] Post-process the user-provided avatar textures at upload time and resize them to reasonable sizes.
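On the first item: the core of the toon shading model is only a few lines of math. A minimal sketch of the quantized-diffuse ramp, written here in Python for illustration (the real implementation would be HLSL; the band count and light setup are assumptions, not values taken from the existing shader):

```python
import math

def toon_shade(normal, light_dir, bands=3):
    """Quantize the Lambertian term N.L into discrete bands --
    the core of a toon/cel shading model."""
    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    n = normalize(normal)
    l = normalize(light_dir)

    # Lambertian diffuse term, clamped to [0, 1].
    n_dot_l = max(0.0, sum(a * b for a, b in zip(n, l)))

    # Snap to `bands` discrete steps -- this produces the
    # characteristic hard-edged toon look.
    return math.ceil(n_dot_l * bands) / bands

# A surface facing the light lands in the brightest band.
print(toon_shade((0, 1, 0), (0, 1, 0)))  # -> 1.0
```

This is the kind of logic that should be a few hundred lines of hand-written HLSL rather than 20k lines of compiled graph output.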
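On the atlasing item: the UV remapping can be sketched as below. This assumes all sub-textures are equally sized and packed into a uniform grid, which is a simplification of the per-avatar dynamic sizing described above:

```python
import math

def atlas_layout(num_textures):
    """Compute a square-ish grid layout for packing `num_textures`
    equally sized sub-textures into one atlas, and return the UV
    transform (scale, offset) each sub-texture's UVs must be
    remapped with when the avatar mesh is baked."""
    cols = math.ceil(math.sqrt(num_textures))
    rows = math.ceil(num_textures / cols)
    scale = (1.0 / cols, 1.0 / rows)

    transforms = []
    for i in range(num_textures):
        col, row = i % cols, i // cols
        # Each sub-texture occupies one grid cell; the shader then
        # samples the atlas once at uv * scale + offset, instead of
        # selecting among 12 separate textures.
        offset = (col / cols, row / rows)
        transforms.append((scale, offset))
    return transforms

# 12 sub-textures pack into a 4x3 grid.
print(atlas_layout(12)[0])
```

With these transforms baked into the merged mesh's UVs, the per-pixel cost drops from 12 texture samples to one.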
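On the upload-time post-processing item, a minimal sketch of one possible resize policy: cap the longest side while preserving aspect ratio. The 512 px cap is an assumption for illustration, not an agreed limit:

```python
def clamp_texture_size(width, height, max_dim=512):
    """Compute a downscaled size for a user-uploaded avatar texture,
    capping the longest side at `max_dim` while preserving the
    aspect ratio. Textures already within the cap are untouched."""
    longest = max(width, height)
    if longest <= max_dim:
        return width, height
    scale = max_dim / longest
    # Round, but never shrink below 1 pixel.
    return max(1, round(width * scale)), max(1, round(height * scale))

print(clamp_texture_size(4096, 2048))  # -> (512, 256)
```

Running this once at upload keeps oversized user textures from inflating the per-player atlas.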