fract4d / gnofract4d

A fractal generation program for Linux

Antialiasing: potential improvement #120

Open mindhells opened 4 years ago

mindhells commented 4 years ago

Here's what I understand of how antialiasing works: every pixel is an area, not a point, within the complex plane. In the first pass (no antialiasing yet), we take the value at the pixel's center in the complex plane and calculate its fate, index, and color. In the second pass, we divide each pixel (remember, it is an area) into 4 areas, called subpixels, and calculate the color for the center of each; the pixel is then assigned the average color of its 4 subpixels (sum each color channel and divide by 4). Assuming no other improvements are in place, this is 4x the cost of calculating the original image, and some extra space is needed too: the image class, which holds the pixel color buffer, also holds some subpixel information (fates and indexes). I haven't found where that information is reused, although the intention seems to be there.
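A minimal sketch of that scheme in Python (here `calc_color(x, y)` is a hypothetical stand-in for the compiled fractal calculation, mapping a pixel-space coordinate to an RGB tuple; this is not the actual fract4d code):

```python
def render_aa_best(width, height, calc_color):
    """2x2 supersampling as described above: sample the center of each
    of the 4 subpixels, then average the color channels."""
    image = [[None] * width for _ in range(height)]
    for py in range(height):
        for px in range(width):
            r = g = b = 0
            for oy in (0.25, 0.75):        # subpixel centers inside the pixel
                for ox in (0.25, 0.75):
                    cr, cg, cb = calc_color(px + ox, py + oy)
                    r, g, b = r + cr, g + cg, b + cb
            image[py][px] = (r // 4, g // 4, b // 4)   # average of 4 samples
    return image
```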

I have no background in image processing, so maybe I'm missing something, but given that this is not a traditional antialiasing algorithm (it's more like smoothing), I'm wondering whether the following improvement is possible (which would, by the way, make use of the subpixel information buffer): when you divide the pixel into 4 subpixels, instead of calculating the color at the center of each, calculate it at the outer vertex. Adjacent pixels would then share common subpixel samples, reducing the total number of calculations to (x+1)*(y+1), which is far less than the current x*y*4. I'm not sure how this would affect the final result, but since the samples would be farther from the center... I hope it's smoother.
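A sketch of that corner-sharing variant, under the same assumptions as above (`calc_color` is hypothetical): evaluate once per pixel-corner vertex and reuse each sample for every pixel that touches it:

```python
def render_aa_shared(width, height, calc_color):
    """Corner sampling: one evaluation per grid vertex, so adjacent
    pixels share samples: (width+1)*(height+1) evaluations instead
    of width*height*4."""
    corners = [[calc_color(px, py) for px in range(width + 1)]
               for py in range(height + 1)]
    image = [[None] * width for _ in range(height)]
    for py in range(height):
        for px in range(width):
            samples = (corners[py][px], corners[py][px + 1],
                       corners[py + 1][px], corners[py + 1][px + 1])
            # per-channel average of the pixel's 4 corner samples
            image[py][px] = tuple(sum(ch) // 4 for ch in zip(*samples))
    return image
```

For a 1920x1080 image that is 1921 * 1081 ≈ 2.08M evaluations versus 1920 * 1080 * 4 ≈ 8.3M, roughly the 4x saving described.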

I'm only considering the best antialiasing mode in this explanation. There's another mode, called fast, which skips part of the calculations based on the similarity of adjacent pixels.

edyoung commented 4 years ago

That would be cheaper, but I think the effect is more like a blur than subsampling. You wind up computing another grid, offset by half a pixel from the pixel centers, then averaging 4 of those samples to calculate each pixel. It will look smoother than not doing any averaging, but the current approach will provide more detail. But feel free to try it.
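One way to see the blur point, sketched on a single-channel grid (the "offset grid" here is the corner-sample grid from the proposal above):

```python
def box_average(offset_grid):
    """Corner sharing amounts to exactly this: a 2x2 box filter over a
    grid offset by half a pixel from the pixel centers.  Each sample is
    smeared across up to 4 output pixels, which smooths edges but cannot
    add the detail that 4 independent subsamples per pixel would."""
    h, w = len(offset_grid) - 1, len(offset_grid[0]) - 1
    return [[(offset_grid[y][x] + offset_grid[y][x + 1]
              + offset_grid[y + 1][x] + offset_grid[y + 1][x + 1]) / 4
             for x in range(w)]
            for y in range(h)]
```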

mindhells commented 4 years ago

I see... I guess I should look at areas of the image with more "entropy" to find out how it really performs compared with the current approach. I think it should be easy to implement, so I will try it... I have to think about how to measure the effect, though. On the other hand, do you remember how you ended up with the current approach? I mean, is subsampling more suitable because of the nature of fractal images? Performance reasons? ...

edyoung commented 4 years ago

I wouldn't claim the current approach is based on very rigorous theory. Essentially we calculate at a higher resolution and average the results. A different arrangement of samples could provide a better speed/quality trade-off; in particular, some random jitter on the subsampled points could reduce moiré effects.
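A sketch of what jittered subsampling could look like (illustrative only, not anything currently in fract4d): stratify the pixel into n*n cells and pick one random point inside each, then average `calc_color` over those offsets instead of the fixed subpixel centers:

```python
import random

def jittered_offsets(n=2, rng=None):
    """One random sample point inside each of the n*n subpixel cells,
    instead of the fixed cell centers.  Randomizing the positions trades
    regular moire artifacts for less objectionable noise."""
    rng = rng or random.Random()
    return [((i + rng.random()) / n, (j + rng.random()) / n)
            for j in range(n)
            for i in range(n)]
```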

I do think the 'fast' antialiasing option is pretty useful. The results are indistinguishable from 'best' and much faster. The only difference is that we guess "well, this pixel is the same as its neighbors, so the subsamples are probably the same too" and just skip that pixel.
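In sketch form, that guess might look like the following (the neighbor test and names are illustrative; the real check may compare fates or use a different neighborhood):

```python
def needs_subsampling(image, px, py):
    """Fast mode's guess: if a pixel's first-pass color already matches
    all of its neighbors, assume the subsamples would match too and skip
    the extra calculations for that pixel."""
    height, width = len(image), len(image[0])
    center = image[py][px]
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = py + dy, px + dx
        if 0 <= ny < height and 0 <= nx < width and image[ny][nx] != center:
            return True   # a neighbor differs: subsample this pixel
    return False          # all neighbors match: skip subsampling
```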

mindhells commented 4 years ago

I'm dropping here a couple of interesting entries from Wikipedia:

The first one took me to the second, where you can see different subsampling patterns. I understand we're currently using a uniform distribution, and you think the random approach could also be worth checking.

The fast enhancement could apply to any of those methods, I think. It's a great improvement.