Open happy-turtle opened 2 years ago
Maybe making a min-stack to store the closest pixels in the linked list and dropping the far pixels is a good approach. The pixels obscured by front pixels are hard to see anyway, unless the front pixels are nearly invisible, so in this case I think a list size of about 18 is enough for most scenarios, ...
Maybe making a min-stack to store the closest pixels in the linked list and dropping the far pixels is a good approach.
Good idea! I think one problem might be how we detect which pixel is further away. We currently only write to the list at list creation, but comparing pixel depths would require reading the list at the same time, and you can't be sure that all other pixels are already in the list. This would probably require some kind of atomic operation.
The list-size limitation only applies when the linked-list result is stored into a fixed block list; the linked list itself doesn't have this problem.
In that case we can keep insertion simple (just insert), and use a min-stack to filter out the smallest depth values when reading the list.
Those steps are simple on the CPU side, but I'm not sure about the performance and constraints of implementing a min-stack on the GPU... maybe I'll try it after I submit my homework...
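The CPU-side version of the idea above could be sketched roughly as follows. This is only an illustration, not the project's actual code: fragments are assumed to arrive as `(depth, rgba)` tuples in arbitrary order, as they would come out of an unordered per-pixel linked list, and `heapq.nsmallest` plays the role of the proposed min-stack by keeping only the closest entries before blending.

```python
import heapq

def blend_closest(fragments, k=18):
    """Keep only the k closest fragments for a pixel and blend them.

    fragments: list of (depth, (r, g, b, a)) tuples in arbitrary order.
    Returns the accumulated premultiplied color and the remaining
    transmittance after front-to-back 'over' blending.
    """
    # Filter to the k smallest depths (the min-stack idea), then
    # sort front-to-back for the usual over-operator blend.
    closest = sorted(heapq.nsmallest(k, fragments, key=lambda f: f[0]))
    color = (0.0, 0.0, 0.0)
    transmittance = 1.0
    for depth, (r, g, b, a) in closest:
        color = (color[0] + transmittance * a * r,
                 color[1] + transmittance * a * g,
                 color[2] + transmittance * a * b)
        transmittance *= (1.0 - a)
    return color, transmittance
```

On the GPU the filtering step is the hard part, since there is no ready-made heap structure and the list is built concurrently, which is where the atomics question from the earlier comment comes back in.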
The current implementation and graphics pipeline structure lead to a problem: a fixed per-pixel list length has to be set before rendering. That means if too many fragments overlap at one pixel location during rendering, the algorithm will fail to blend those pixels correctly.
There is no way to set the pixel list length dynamically. The alternative, which has already been proposed and implemented by others, is to gracefully degrade the transparency blending: visual coherency is retained at the cost of alpha-blending accuracy. I am not sure whether this requires additional synchronization; some papers mention needing pixel synchronization for this approach.
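A graceful-degradation insert could look something like the following CPU-side sketch. This is a hypothetical illustration of the general idea (in the spirit of merge-based approaches such as adaptive transparency), not code from this project: when the fixed-length list is full, the farthest fragment (stored or incoming) is folded into a single `tail` accumulator instead of being dropped outright. On the GPU this read-compare-write would need atomics or pixel synchronization, as noted above.

```python
def insert_bounded(pixel_list, tail, fragment, max_len=8):
    """Insert a fragment into a fixed-length per-pixel list.

    When the list is full, the farthest fragment (new or stored) is
    merged into a 'tail' accumulator of premultiplied (r, g, b, a),
    so blending degrades gracefully instead of failing outright.
    Returns the updated tail; pixel_list is modified in place.
    """
    depth = fragment[0]
    if len(pixel_list) < max_len:
        pixel_list.append(fragment)
        return tail
    # Find the farthest stored fragment.
    far_i = max(range(len(pixel_list)), key=lambda i: pixel_list[i][0])
    if pixel_list[far_i][0] > depth:
        # Evict the stored farthest fragment, keep the closer new one.
        evicted = pixel_list[far_i]
        pixel_list[far_i] = fragment
    else:
        evicted = fragment
    ed, (er, eg, eb, ea) = evicted
    # Fold the evicted fragment into the tail with an 'over' blend;
    # ordering among the merged tail fragments is approximate.
    tr, tg, tb, ta = tail
    return (tr + (1 - ta) * ea * er,
            tg + (1 - ta) * ea * eg,
            tb + (1 - ta) * ea * eb,
            ta + (1 - ta) * ea)
```

The accuracy loss is confined to the merged tail: the closest `max_len` fragments are still blended exactly, which matches the observation that far, occluded fragments contribute little to the final pixel.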
To solve this problem we would need to find an algorithm that can be implemented in Unity. Some research suggestions: