Bit of an update. It's a bit technical. But writing this down is a good exercise anyway.
I've moved in the direction of having three places where GPU data can be specified:
- scene/object: info on the transform, canvas size, and camera. Lights would also go here.
- geometry: the index buffer and the data (buffers / textures).
- material: other data that defines the appearance of the object. Can also include additional buffers that are used internally (e.g. to hold normals that we calculate with a compute shader, or triangles generated from line data).

Each object (uniform buffer, storage buffer, texture) has a slot. This creates a problem: how do you match the slots up and deal with the fact that there is one namespace? Good thing that wgpu has bind groups (now I finally understand their use)!

So we simply use 3 bind groups. The stuff in bind group 0 is well-defined (it's always the same scene-related info). The stuff in bind group 1 depends on the geometry. This is pretty well-defined too, since materials kinda only work for a specific geometry anyway. The stuff in bind group 2 is defined by the material itself, so you can go all crazy here; nobody else cares.

All bind groups are available in all shaders (for now). There are no vertex buffers at the moment, because you cannot bind a buffer as a vertex buffer and a storage buffer at the same time. Maybe we can work around that. But maybe we don't have to. I kinda like how all data is treated equally.

Not sure if this is the best approach and how it matters for performance, but at least it's a step in a direction that will allow creating more materials and geometries.
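A rough sketch of this three-bind-group split with wgpu-py (the entry-dict fields differ between wgpu-py versions, and the actual bindings per group would be more numerous, so treat this as illustrative rather than the real implementation):

```python
import wgpu

def create_pipeline_layout(device):
    # Group 0: scene/object info (transforms, canvas size, camera, lights).
    scene_layout = device.create_bind_group_layout(entries=[
        {"binding": 0,
         "visibility": wgpu.ShaderStage.VERTEX | wgpu.ShaderStage.FRAGMENT,
         "buffer": {"type": wgpu.BufferBindingType.uniform}},
    ])
    # Group 1: geometry data, exposed as (read-only) storage buffers.
    geometry_layout = device.create_bind_group_layout(entries=[
        {"binding": 0,
         "visibility": wgpu.ShaderStage.VERTEX | wgpu.ShaderStage.COMPUTE,
         "buffer": {"type": wgpu.BufferBindingType.read_only_storage}},
    ])
    # Group 2: whatever the material needs (uniforms, textures, scratch buffers).
    material_layout = device.create_bind_group_layout(entries=[
        {"binding": 0,
         "visibility": wgpu.ShaderStage.FRAGMENT,
         "buffer": {"type": wgpu.BufferBindingType.uniform}},
    ])
    return device.create_pipeline_layout(
        bind_group_layouts=[scene_layout, geometry_layout, material_layout]
    )
```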
There are two differences compared to THREE.JS here:
1) Textures go into materials, because they are assigned via uniforms (I think). See lines 66-69 of this example: https://github.com/mrdoob/three.js/blob/master/examples/webgl_materials_texture_rotation.html#L66-L69
Example: https://threejs.org/examples/?q=texture#webgl_materials_texture_rotation
2) Viewport/canvas size goes into the Renderer. This is because there may be multiple cameras in use. The projection/view/screen matrix can be broken up neatly such that cameras only bring the world into projection/view space, and the renderer can then tack on a final transform to go from view to screen space. I also think the ScreenCoordsCamera shouldn't exist for this reason.
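For illustration, a minimal numpy sketch of that split (the function name and the row-vector/column-vector conventions are made up for this example): the camera supplies view and projection, and the renderer appends one canvas-dependent matrix to go from NDC to screen coordinates.

```python
import numpy as np

def ndc_to_screen(width, height):
    """Map NDC x/y in [-1, 1] to pixel coordinates, origin at the top-left."""
    return np.array([
        [width / 2,  0,           0, width / 2],
        [0,         -height / 2,  0, height / 2],
        [0,          0,           1, 0],
        [0,          0,           0, 1],
    ])

# Camera: world -> view -> clip/NDC. Renderer: NDC -> screen.
# full_transform = ndc_to_screen(w, h) @ projection @ view @ world
```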
> Each object (uniform buffer, storage buffer, texture) has a slot. This creates a problem: how do you match the slots up and deal with the fact that there is one namespace? Good thing that wgpu has bind groups (now I finally understand their use)!
So bind groups are like namespaces? Every group defines a slot zero? Is that how it works?
> All bind groups are available in all shaders (for now).
What do you mean with "(for now)"? Do you have plans?
> There are no vertex buffers at the moment, because you cannot bind a buffer as a vertex buffer and a storage buffer at the same time.
Can you explain that a little more? It's not clear how that follows: even if you cannot bind a buffer as a vertex buffer and a storage buffer at the same time, you can still bind it as a vertex buffer. So why are there no vertex buffers at the moment?
--
I guess this means there should be a fourth bind group for the renderer then, so it can supply e.g. canvas size and related transforms perhaps?
Woops, I mistakenly posted that overview comment in the wrong PR, sorry!
> Textures go into materials, because they are assigned via uniforms
For wgpu a texture is just another resource, like a buffer. So it does not matter technically. I think that in ThreeJS, textures are usually considered "appearance" of a mesh. But what we're building is more science-oriented. I'd say the pixels/voxels of images and volumes should be considered "the data", and therefore be made part of the geometry object. And ... we should thus perhaps rename that class? Let's discuss in a call.
> Viewport/canvas size goes into the Renderer
Agreed. I mean that we want to feed this info into the shader. Together with the three transforms (world, (inverse) camera, projection), it forms a nice bundle that can be exposed as one uniform struct, I think.
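Something like this, as a rough ctypes sketch (the field names are hypothetical, and the real struct would have to respect std140 alignment rules):

```python
import ctypes

class StdInfo(ctypes.Structure):
    # Hypothetical layout for the shared uniform bundle; each mat4 is 16 floats.
    _fields_ = [
        ("world_transform", ctypes.c_float * 16),
        ("cam_transform", ctypes.c_float * 16),        # inverse camera matrix
        ("projection_transform", ctypes.c_float * 16),
        ("canvas_size", ctypes.c_float * 2),           # viewport size in pixels
        ("_padding", ctypes.c_float * 2),              # pad struct to a 16-byte multiple
    ]
```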
> So bind groups are like namespaces? Every group defines a slot zero? Is that how it works?
Yes, bindings are collected into groups. So far we have just used/assumed group 0 everywhere. I should do some more reading on bind groups to learn more about performance etc.; we don't want to misuse them ...
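Roughly like this (a hand-wavy wgpu-py sketch; the exact entry-dict fields and the set_bind_group signature depend on the wgpu-py version): binding 0 can exist in several groups, because the group index disambiguates them at draw time.

```python
def attach_groups(render_pass, device, scene_layout, material_layout,
                  scene_ubo, material_ubo):
    # Binding 0 exists in both groups; the group ("set") index keeps them apart.
    scene_group = device.create_bind_group(
        layout=scene_layout,
        entries=[{"binding": 0,
                  "resource": {"buffer": scene_ubo, "offset": 0, "size": scene_ubo.size}}],
    )
    material_group = device.create_bind_group(
        layout=material_layout,
        entries=[{"binding": 0,
                  "resource": {"buffer": material_ubo, "offset": 0, "size": material_ubo.size}}],
    )
    render_pass.set_bind_group(0, scene_group, [], 0, 0)     # set 0 in the shader
    render_pass.set_bind_group(2, material_group, [], 0, 0)  # set 2 in the shader
```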
> What do you mean with "(for now)"? Do you have plans?
It seemed nice to make all bindings available in all shaders. But I'm not sure what that'd do to performance.
> Can you explain that a little more?
Let's consider a Line material that has a compute shader which converts the line data into triangles. Compute shaders don't have vertex buffers, only storage buffers. So perhaps we should expose buffers differently depending on the shader ... Instead, we could use storage buffers for everything, making things a lot simpler. From what I've read, it does not matter for performance whether you use a vertex buffer or a storage buffer.
All this said: this needs more research ...
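For the line case, the buffer side could look something like this (sizes and names are made up for the sketch): the compute pass fills the second buffer, and the vertex shader then reads from it as a storage binding instead of a vertex attribute.

```python
import wgpu

def create_line_buffers(device, n_points):
    # Input line points, uploaded from the CPU and read by the compute shader.
    line_points = device.create_buffer(
        size=n_points * 4 * 4,  # one vec4<f32> per point
        usage=wgpu.BufferUsage.STORAGE | wgpu.BufferUsage.COPY_DST,
    )
    # Output: 2 triangles (6 vertices) per line segment, written by the
    # compute shader and read back in the vertex shader via the vertex index.
    triangle_vertices = device.create_buffer(
        size=(n_points - 1) * 6 * 4 * 4,
        usage=wgpu.BufferUsage.STORAGE,
    )
    return line_points, triangle_vertices
```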
> Instead, we could use storage buffers for everything, making things a lot simpler. From what I've read, it does not matter for performance whether you use a vertex buffer or a storage buffer.
It seems weird that the API would provide vertex buffers at all then if the only difference is just less convenience...
> Woops, I mistakenly posted that overview comment in the wrong PR, sorry!
😄
> > Textures go into materials, because they are assigned via uniforms
>
> For wgpu a texture is just another resource, like a buffer. So it does not matter technically. I think that in ThreeJS, textures are usually considered "appearance" of a mesh.
- Unreal: https://docs.unrealengine.com/en-US/Engine/Rendering/Materials/IntroductionToMaterials/index.html
- Unity: https://docs.unity3d.com/Manual/Textures.html
- Godot: https://docs.godotengine.org/en/3.2/tutorials/3d/spatial_material.html#material-colors-maps-and-channels
- Three: https://threejs.org/docs/#api/en/materials/MeshBasicMaterial
All the big boys have textures as material parameters. Things like albedo, normal, bump, etc. It makes sense to me since the shader will have to specifically support a certain combination of texture maps. I guess we already kind of support this by having uniforms and shaders in our materials?
> But what we're building is more science-oriented. I'd say the pixels/voxels of images and volumes should be considered "the data", and therefore be made part of the geometry object. And ... we should thus perhaps rename that class?
Well, the way I see it, visvis2 is just a scene graph implementation packaged up with a forward renderer. That doesn't strike me as "science-oriented"... we'll be science-oriented when we build an application with visvis2!
I think the existing paradigm covers all our use cases quite well. There's a Texture class (with some subclasses like Texture2D) and some Materials accept Texture parameters. That allows you to sample textures in shaders. E.g. volume rendering or slice rendering can then be implemented with planes and cube geometries linked to specialized materials. I think the Mesh scene graph node supports that scenario already.
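In rough pseudo-code (class names made up here, not the actual visvis2 API), I mean something like:

```python
# Hypothetical names, just to illustrate the paradigm described above.
class Texture2D:
    def __init__(self, data):
        self.data = data  # e.g. a numpy array, uploaded to the GPU by the renderer

class VolumeSliceMaterial:
    def __init__(self, map, clim=(0, 1)):
        self.map = map    # the texture sampled in the fragment shader
        self.clim = clim  # contrast limits applied when sampling

# A slice is then just a plane mesh with this material:
# mesh = Mesh(PlaneGeometry(...), VolumeSliceMaterial(map=Texture2D(voxels)))
```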
> Let's discuss in a call.
Sure. :)
> That doesn't strike me as "science-oriented"... we'll be science-oriented when we build an application with visvis2!
Good point.
> All the big boys have textures as material parameters.
Mmm, true, but this is also for historic reasons. It wasn't too long ago that storage buffers did not exist or were considered special. Now we have them by default. And buffers and textures are quite similar kinds of resources. They're also bound in much the same way in wgpu and SpirV. Things are much more "generic", and I feel we should at least consider passing this generality through in the visvis2 API ...
shadertype_as_ctype()
.SpirVType
toShaderType
.ShaderType
classes cannot be instantiated anymore. It's a description of a type, nothing more.