TheCandidStartup / TheCandidStartup.github.io

The Candid Startup Blog
https://www.thecandidstartup.org
BSD 3-Clause "New" or "Revised" License

2023/03/13/trip-graphics-pipeline #14


utterances-bot commented 1 year ago

A Trip Down The Graphics Pipeline

It seems like a long time since I described the general areas for my initial projects. I’ve talked a lot about my cloud based, open source, serverless, customer deployed, scalable spreadsheet project. Today I’m going to switch gears and make a start on Interactive Viewing for Large Geometric Models.

https://www.thecandidstartup.org/2023/03/13/trip-graphics-pipeline.html

Wayne82 commented 1 year ago

This reminds me of the good old days when I worked on Navisworks, whose default graphics pipeline was pretty much the fixed-function pipeline, whereas the integration with OGS belongs to the next era, programmable shaders, if I remember correctly. It is really exciting to see the evolution of the graphics pipeline and the new possibilities it enables.

Regarding the post processing, I was under the impression that this is mainly application-level post processing: the application reads back the render target and makes a second draw call for a simple quad, with the render target bound as an image texture. But it seems the pipeline itself has a dedicated stage for post processing, one that can also be programmed directly!

Last, a quick question: the post mentions that some fragments can end up not being visible. Is that because of the fixed per-fragment operations that happen after the fragment shader stage, or are there further processes in post processing that can cause a fragment to be invisible?

timwiegand commented 1 year ago

Funny you should say that, the next post in this topic will be on the Navisworks pipeline and the integration with OGS.

Remember that the pipeline is a conceptual model. With most hardware, the operations in the post processing stage are a mix of fixed-function hardware (e.g. blend ops) and a pixel shader triggered, as you say, by drawing a screen-covering quad. In practice, it doesn't make any difference whether you do that yourself or have a special post process shader in the graphics API.
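As a rough sketch (Python, purely illustrative; the names are hypothetical, not a real graphics API), the application-driven approach amounts to running a per-pixel function over the render target, which is exactly what drawing a screen-covering quad with a pixel shader does:

```python
# Illustrative sketch: post processing as a per-pixel pass over the
# render target. This models what a screen-covering quad plus a pixel
# shader achieves; function names are hypothetical, not a real API.

def post_process(render_target, shade):
    """Apply a 'pixel shader' function to every pixel of the target."""
    return [[shade(pixel) for pixel in row] for row in render_target]

# Example pass: a simple colour invert, standing in for tone mapping,
# blur, bloom, or any other post-process effect.
def invert(pixel):
    r, g, b = pixel
    return (1.0 - r, 1.0 - g, 1.0 - b)

frame = [[(0.25, 0.5, 0.75)]]          # a 1x1 'render target'
result = post_process(frame, invert)   # -> [[(0.75, 0.5, 0.25)]]
```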

Imagine drawing a series of quads from far to near. For each quad, all fragments are visible at the time they're processed. You do all the work to shade them and write them to the render target. Then the next quad is drawn over the top. When the scene is complete only the final quad is visible, and you've ended up processing loads of fragments that in the end aren't visible. If you move all the shading work to the post processing stage, you wait until everything has been rasterized and visibility resolved before shading just whatever ended up being visible.
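To put numbers on the overdraw point, here's a toy simulation (Python, purely illustrative) of a stack of full-screen quads drawn far to near. Forward shading runs the "shader" for every fragment of every quad; deferring shading until after visibility is resolved shades each pixel exactly once:

```python
# Toy overdraw simulation. The 'scene' is num_quads full-screen quads
# drawn far to near, so every quad passes the depth test when drawn
# and each later quad overwrites the one before it.

def forward_shading_cost(num_quads, width, height):
    # Every fragment of every quad is shaded, even though only the
    # final (nearest) quad's fragments survive.
    return num_quads * width * height

def deferred_shading_cost(num_quads, width, height):
    # Rasterize first, resolve visibility, then shade only the
    # fragments that remain: one per pixel.
    return width * height

# 10 overlapping full-screen quads on a 1920x1080 render target:
quads, w, h = 10, 1920, 1080
print(forward_shading_cost(quads, w, h))   # 20736000 fragments shaded
print(deferred_shading_cost(quads, w, h))  # 2073600 fragments shaded
```

With 10x overdraw, deferring the shading cuts the shading work by a factor of 10; real scenes have uneven overdraw, but the principle is the same.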