lucasb-eyer / go-colorful

A library for playing with colors in go (golang).
MIT License

DistanceRGB doesn't use linear RGB #48

Closed makew0rld closed 3 years ago

makew0rld commented 3 years ago

DistanceRGB should use linear RGB, as otherwise the distance doesn't really hold any meaning, because it's measuring a non-linear space.

KelSolaar commented 3 years ago

otherwise the distance doesn't really hold any meaning, because it's measuring a non-linear space.

I don't know what the aforementioned code does, but at any rate, conceptually this is not true at all; the opposite is actually what perceptually uniform spaces strive for: producing metrics that are meaningful from a human observer's standpoint, and thus measuring distance. CIE Delta E 2000 happens in the highly non-linear CIE Lab space, compared to an RGB space, but it is a space that is trying to be linear for a human observer. It all depends on what you are trying to achieve here.

makew0rld commented 3 years ago

My understanding is that measuring euclidean distance in a non-linear space serves no purpose, and so when you measure euclidean distance in sRGB you aren't measuring anything meaningful. Whereas in linear RGB, the euclidean distance actually represents the distance between each color. What CIE Delta E does is different.
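To make that concrete, here is a minimal standalone sketch of the two quantities being compared (plain Go with the standard sRGB transfer function, not go-colorful's own code):

package main

import (
    "fmt"
    "math"
)

// srgbToLinear converts an sRGB-encoded channel value in [0, 1] to linear light.
func srgbToLinear(v float64) float64 {
    if v <= 0.04045 {
        return v / 12.92
    }
    return math.Pow((v+0.055)/1.055, 2.4)
}

// dist is the Euclidean distance between two RGB triplets.
func dist(a, b [3]float64) float64 {
    var sum float64
    for i := range a {
        d := a[i] - b[i]
        sum += d * d
    }
    return math.Sqrt(sum)
}

func main() {
    c1 := [3]float64{0.2, 0.2, 0.2} // sRGB-encoded values
    c2 := [3]float64{0.4, 0.4, 0.4}

    // Distance measured directly on the encoded values...
    fmt.Println("sRGB distance:      ", dist(c1, c2))

    // ...versus distance measured after converting to linear light.
    var l1, l2 [3]float64
    for i := 0; i < 3; i++ {
        l1[i] = srgbToLinear(c1[i])
        l2[i] = srgbToLinear(c2[i])
    }
    fmt.Println("linear RGB distance:", dist(l1, l2))
}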

CIE Lab space [...] is a space that is trying to be linear for a human observer

My understanding is that it is actually trying to be linear to machines in a way that matches up with human observations. So if you take euclidean distance in CIELAB, it's supposed to actually match up with how humans perceive the similarity of colors. Of course it turns out they didn't do it perfectly, and had to invent CIE DeltaE to fix that, but that's another story.

KelSolaar commented 3 years ago

My understanding is that measuring euclidean distance in a non-linear space serves no purpose

This is correct!

and so when you measure euclidean distance in sRGB you aren't measuring anything meaningful.

This is not correct. The reason is that while non-linearly encoded sRGB is non-linear from a photometric standpoint, it is actually more linear from a human observer's standpoint, i.e. from a perceptual standpoint. If you were to compare the sRGB inverse EOTF and the CIE L* lightness function used by CIELAB, you would see that while they are not the same, they are closer to each other than to a purely linear function.

image

Everything is a matter of reference point. With that in mind, I'm not saying that you should perform colour differences in non-linearly encoded sRGB space but if your goal is to generate perceptually smoother gradients and such things, it is better than linear sRGB.

makew0rld commented 3 years ago

Woah, I didn't realize that. It makes sense though, thanks for explaining.

With that in mind, I'm not saying that you should perform colour differences in non-linearly encoded sRGB space but if your goal is to generate perceptually smoother gradients and such things, it is better than linear sRGB.

I'm trying to do color quantization and dithering, and so I need to find the closest color in the palette to replace the existing one. Obviously there are many ways to do this, with CIEDE2000, etc, but I wanted to do it in RGB for some testing. But it sounds like I should be doing it in sRGB instead of linear RGB.
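For reference, the operation I mean is just this kind of nearest-colour search (a sketch, not go-colorful's API; it assumes `math` is imported, and the distance function is whichever one we settle on, e.g. Euclidean distance on sRGB-encoded or on linearized values):

// nearest returns the index of the palette colour with the smallest
// distance to c, for whatever distance function is passed in.
func nearest(c [3]float64, palette [][3]float64, dist func(a, b [3]float64) float64) int {
    best, bestDist := 0, math.Inf(1)
    for i, p := range palette {
        if d := dist(c, p); d < bestDist {
            best, bestDist = i, d
        }
    }
    return best
}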

But wait! This post contradicts you. It shows this image for a dithered gradient in sRGB:

dithered gradient in sRGB

And this for it in linear RGB:

dithered gradient in linear RGB

As you can see, and as the post notes, it's only when it's done in linear RGB that it actually looks correct, the sRGB gradient gets light too quickly.

But maybe that's because that dithered gradient is just performing image operations like adding and subtracting color values, and then just being thresholded against a black and white palette? I'm not sure.

Here look, I've done my own tests with color images.

Here is the original peppers image:

peppers

Here it is with random noise dithering, using euclidean distance in sRGB. The palette is pure red, pure green, full yellow, and black.

random_noise_rgb_red-green-yellow

And here it is, using euclidean distance in linear RGB:

random_noise_rgb_red-green-yellow

Perhaps the colors are better in sRGB, but the dark areas are better in linear RGB? Overall the linear RGB version seems to look better, though.

Here's another test. Same original image, but a new palette. I tried to choose this palette based on the colors in the image.

color.RGBA{188, 208, 86, 255}
color.RGBA{199, 58, 41, 255}
color.RGBA{128, 168, 80, 255}
color.Black
color.White

Here's sRGB:

random_noise_rgb_custom_sRGB

Here's linear RGB:

random_noise_rgb_custom_linear

Here's an APNG that goes between the three - original, then sRGB, then linear RGB. You can see how the linear RGB is darker, and the colors appear less washed out. I think it looks better.

output

Or maybe linear RGB is just being biased towards pure black, as your graph shows. More tests then...

Here's the same palette as before, but with no white or black.

sRGB:

random_noise_rgb_custom_sRGB

Linear RGB:

random_noise_rgb_custom_linear

APNG (original, sRGB, then linear RGB):

output2

Here I think it's actually the sRGB that does better. It keeps the dark areas darker, if you look at the middle green pepper. It's the linear RGB that appears to wash out the image a bit. However, the difference is much subtler than in the previous examples.

I'm not quite sure what to think here. Do you have any more mathematical insight? Do you agree or disagree about which image looks better? Thanks.

makew0rld commented 3 years ago

Comment removed

This comment used to contain color images, and then used them to draw the conclusion that sRGB is better than linear RGB for color images. The images only appeared that way because they were scaled down and the pixels were averaged. If you displayed the images at 100% size, the linear RGB appears more accurate. You can check the source code of this comment to see the original images.

makew0rld commented 3 years ago

The question that still remains for me is whether linear RGB or sRGB makes sense for grayscale images. Again, not for the actual operations, but for matching to a palette.

makew0rld commented 3 years ago

Black and white palette.

Original image:

gradient

Quantized using linear RGB:

no_dither

Quantized using sRGB:

no_dither

All together:

three_gradients

From this it seems obvious that using euclidean distance in sRGB is better than linear RGB for finding a closest palette match, even for grayscale images. Keep in mind linear RGB is still required for modification of color values, like adding randomness to an image or something.

So it would go:

  • Convert from sRGB to linear
  • Modify color of pixel
  • Convert back to sRGB
  • Calculate euclidean distance between the color and the palette colors (palette must be sRGB too)
  • Set pixel to closest palette color

Sorry for dumping so much in this thread, but I hope it can be helpful to others too.

KelSolaar commented 3 years ago

Glad this discussion was helpful!

As you can see, and as the post notes, it's only when it's done in linear RGB that it actually looks correct, the sRGB gradient gets light too quickly.

Looking at both of them while squinting with my eyes to "blur" the signal, neither looks particularly great :) Which one looks closer to the reference gradient for you?

makew0rld commented 3 years ago

Which one looks closer to the reference gradient for you?

Really not that sure, but I believe the second one.


Also, I tried to apply this newfound knowledge to other grayscale images... and it didn't work so well.

Adding randomness in linear RGB, and doing euclidean distance in linear RGB for palette matching:

random_noise_grayscale

Adding randomness in linear RGB, and doing euclidean distance in sRGB for palette matching:

random_noise_grayscale

Clearly, the sRGB one has issues; it's not even keeping pure black as black. Now, perhaps my code has issues, in which case I'll be embarrassed to have drawn this all out. But maybe someone can explain whether what I'm seeing here makes sense or not?

There is one big difference between the color image randomness (the peppers above) and this grayscale randomness. With the color image, a separate random number is being applied to R, G, and B. For the grayscale image, there is only one number, effectively making it "more random". This doesn't explain why it isn't leaving the pure black alone though...

makew0rld commented 3 years ago

If it helps, I'll explain a bit more. Sorry about how far off track I've gotten, but I'd appreciate any help.

Obviously, the second image above is some sort of error. These are the steps I followed, which I described earlier:

  • Convert from sRGB to linear
  • Modify color of pixel
  • Convert back to sRGB
  • Calculate euclidean distance between the color and the palette colors (palette must be sRGB too)
  • Set pixel to closest palette color

These steps are causing the issue. Mathematically I can see why this is happening. For example, let's take the pure black at the left side of the gradient, with its value of 0. When linearized, it's still 0. Now a random number is added to it, from -0.5 to 0.5. Let's say it's 0.5. Now the final linear color is 0.5, and it needs to be converted to sRGB. Converting to sRGB roughly gives us 0.7353569831, which will be quantized to pure white. This explains why the gradient looks so wrong up there, because converting to sRGB is increasing numbers dramatically.
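A quick numeric check of that walkthrough, using the linearize/delinearize helpers shown a couple of comments down (just a sketch):

v := 0.0                      // pure black, sRGB-encoded
lin := linearize(v)           // still 0.0 in linear light
lin += 0.5                    // add the maximum random offset
fmt.Println(delinearize(lin)) // ≈ 0.7353569831, which quantizes to pure white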

So when I remove that third step from the code, and don't convert back to sRGB, the gradient looks fine. So it seems I have a solution, but I'm very confused as to why that's the case. I've put it in linear RGB, so why don't I need to convert it back to sRGB? Wikipedia seems to suggest you would need to, but my code does not.

Thanks.

makew0rld commented 3 years ago

I've discovered that if I linearize the max random value (0.5 becomes 0.2140411405) and delinearize the min random value (-0.5 becomes -0.7353569831) then the gradient is correct when I convert back to sRGB. I'm totally confused why this works, and would appreciate any insight.

makew0rld commented 3 years ago

@KelSolaar @sobotka sorry for the tag, but I'm stumped.

sobotka commented 3 years ago

I’m not sure what exactly is stumping you here, as it is unclear over the past few posts.

Thomas has followed along, and his advice is more solid than 99.95% of what you’ll find online.

My sole thought is that dithering in RGB is ultimately radiometric-like in nature, assuming one can get the plot right. If it works at the pixel level, it should also scale appropriately spatially, but perhaps I am missing something? Are your linear RGB calculations properly taking into account the spatial area?

As for negatives, they are completely bunko regarding light emissions; even black holes would be zero reflectance, not negative. Granted, I haven’t been to a black hole...

makew0rld commented 3 years ago

@sobotka Sorry, I've been figuring things out a bit publicly, I understand why it's confusing.

What's stumping me at this point is why using euclidean distance in sRGB seems to work well for color images (this comment), and worked for grayscale quantization as I showed in this comment, but failed when using random dithering, as I showed in this comment. Why is euclidean distance in sRGB not working in that case?

Are your linear RGB calculations properly taking into account the spatial area?

I'm not really sure how one would do this, I would appreciate more info. sRGB is being converted to linear RGB like this:

// linearize converts an sRGB-encoded channel value in [0, 1] to linear light.
func linearize(v float64) float64 {
    if v <= 0.04045 {
        return v / 12.92
    }
    return math.Pow((v+0.055)/1.055, 2.4)
}

I'm then doing an operation with the output, like adding or subtracting. Then I'm converting back to sRGB:

// delinearize converts a linear-light value back to an sRGB-encoded value in [0, 1].
func delinearize(v float64) float64 {
    if v <= 0.0031308 {
        return 12.92 * v
    }
    return 1.055*math.Pow(v, 1.0/2.4) - 0.055
}

For the grayscale images the RGB is converted into a single linear grayscale number, right after the conversion to linear RGB.
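That grayscale conversion is a weighted sum of the linearized channels, something like this (a sketch using the standard Rec. 709 / sRGB luminance weights):

// linearGray collapses linearized R, G, B into a single luminance value
// using the Rec. 709 / sRGB luminance weights.
func linearGray(r, g, b float64) float64 {
    return 0.2126*r + 0.7152*g + 0.0722*b
}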

This is all defined by Wikipedia here.

As for negatives, they are completely bunko regarding light emissions; even black holes would be zero reflectance, not negative. Granted, I haven’t been to a black hole...

The negatives I was talking about were for the range of random values for dithering. So a random value between -x and x would be added to the linear grayscale value, before converting back to sRGB.

Thanks for the help.

sobotka commented 3 years ago

distance in sRGB seems to work well for color images

I don’t see anything “working well” there. Reduce your cases to less complex entry points? Can you demonstrate with two pixels, expanded, for example? Four? Etc.

and worked for grayscale quantization as I showed in this comment,

Quantisation is about an input and an output, and I'm not sure you've identified your input X axis and your output Y axis clearly enough here. Certainly not clearly enough for me. It also might help to outline what exactly one's expectation or "hope" is here.

I'm not really sure how one would do this, I would appreciate more info.

Think about a simple case of four pixels. Two diagonal pixels are off, at zero emission. The other two are set to full emission. What is the net sum relationship of the four pixels with respect to overall emission? Those sorts of questions are extremely relevant when discussing dithering; we can think of that staccato pattern as being a dither of a specific value. What is it? What is the interaction when we change that target value?

The negatives I was talking about were for the range of random values for dithering. So a random value between -x and x would be added to the linear grayscale value, before converting back to sRGB.

What does a negative sum end up meaning? Don’t get stuck in the math. Focus on the display hardware first, and work backwards, perhaps?

makew0rld commented 3 years ago

@sobotka

I don’t see anything “working well” there.

For the color images, when I matched to the palette using euclidean distance in sRGB vs linear RGB, the sRGB output appeared to match the original better. My understanding is that this is proven by the graph Thomas sent here.

It also might help to outline what exactly one’s expectation or “hope” is here.

My hope for grayscale quantization is that all the pixels in the image that are darker than the middle point become black, and all the pixels that are lighter than that middle point become white. That middle point would be what humans perceive as the middle between completely dark (black) and completely light (white).

Instead of simply thresholding a value, I'm trying to use euclidean distance to do this, so that the algorithm works well for palettes with more than just two colors.

Now the question remains as to what that middle point is, and what color space for euclidean distance will create the most accurate results. Using linear RGB will result in 0.7353569831 being considered the middle point, while sRGB will result in 0.5 being considered the middle point. But using sRGB also seems to have other issues as I described here.

Think about a simple case of four pixels. Two diagonal pixels are off, at zero emission. The other two are set to full emission. What is the net sum relationship of the four pixels with respect to overall emission?

In none of my examples have I had to sum any pixel values together. All my operations have applied to each pixel independently. I have talked about quantization and random dithering so far, and for both of them the example pixels you describe would remain the same, assuming the palette used contains a full emission and zero emission "color".

The negatives I was talking about were for the range of random values for dithering. So a random value between -x and x would be added to the linear grayscale value, before converting back to sRGB.

What does a negative sum end up meaning? Don’t get stuck in the math. Focus on the display hardware first, and work backwards, perhaps?

You're asking what would happen if a negative number was added to a linear grayscale value, resulting in a negative sum? That would be clamped to 0. I'm not sure what you mean by focusing on the display hardware, how should I go about that?

I hope that clears some things up, thanks for taking the time.

makew0rld commented 3 years ago

Now the question remains as to what that middle point is

Reading this Wikipedia article has helped me understand the different possible middle values more:

image

When I look back at this comment, it's obvious that using sRGB for palette matching is leading to a middle value of 128 (or 127) as the table above shows. And using linear RGB is equivalent to the "Absolute whiteness" middle (note the 1.0 gamma value), which is 188.
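Those two middle values fall straight out of the transfer function (a quick sketch reusing delinearize from earlier):

fmt.Println(math.Round(0.5 * 255))              // 128: the middle when matching in sRGB
fmt.Println(math.Round(delinearize(0.5) * 255)) // 188: the middle when matching in linear RGB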

If we consider 50% CIELAB brightness to be the correct human perception of middle gray, then it's obvious that sRGB should be used for palette matching over linear RGB, as the middle gray of sRGB is much closer to CIELAB's.

But as I figured out in this comment and the one below, it's adding randomness that's the problem. Adding a random number from -0.5 to 0.5 to every pixel results in the image becoming way too bright, because the 0.5 value represents the middle in linear RGB, not sRGB. Now it would be nice if the random number could just be added to the sRGB value directly, but my understanding is that that would produce inaccurate results because of its non-linearity.

So instead I tried to offset the brightness bias, as I described here:

I've discovered that if I linearize the max random value (0.5 becomes 0.2140411405) and delinearize the min random value (-0.5 becomes -0.7353569831) then the gradient is correct when I convert back to sRGB.

This worked for the random dithering, producing a correct-looking gradient like this:

random_noise_grayscale

But then I wondered, can I generalize this to other types of dithering like Bayer? So I started using this algorithm:

// fixLinearValue adjusts a dither offset before it is added to a linear
// grayscale value: positive offsets are linearized, negative offsets are
// delinearized (and kept negative), as described above.
func fixLinearValue(v float64) float64 {
    if v < 0 {
        return -delinearize(-v)
    }
    return linearize(v)
}

Whenever some number is added to a grayscale value, fixLinearValue would be applied to the number before it was added.
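In code terms the loop looks roughly like this (a sketch; `bayerOffset` is just a stand-in for whatever the dither matrix or random generator produces for a pixel):

lin := linearize(gray)                   // the pixel's grayscale value, in linear light
lin += fixLinearValue(bayerOffset(x, y)) // the dither offset, "fixed" before being added
// ...then clamp lin and match it against the palette as before.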

And here's the output with a level 3 Bayer matrix.

With linear RGB palette matching:

bayer3_gradient

With non-fixed sRGB palette matching - note how it's too bright:

bayer3_gradient

With fixed sRGB palette matching:

bayer3_gradient

Original gradient for reference:

gradient

While the fixed sRGB gradient isn't that smooth, the colors seem much more accurate. The linear RGB gets too dark, too fast. All of this can be seen more obviously when the gradients are scaled down:

Original gradient:

Screenshot_2021-01-20_11-34-50

Linear RGB:

Screenshot_2021-01-20_11-35-06

Fixed sRGB:

Screenshot_2021-01-20_11-35-57

I'm sorry, I know I'm basically writing a blog post at this point. And I plan to turn this into one! I guess I've been putting stuff here as a sanity check, because I'm new to color theory, and I want to make sure I'm not totally off base with this. So does what I'm doing make sense? Have I discovered something new, or has this been written about before? Thank you.

Edit: I don't want to cloud up this thread any more than I already have, but after some more testing, it looks like the "fixing" works well for full color images too.

makew0rld commented 3 years ago

I haven't touched on error diffusion dithering yet. None of this fixing can really be applied to it; I couldn't find a way that works, likely because error diffusion matrices are designed in a specific way, rather than being a random number.

makew0rld commented 3 years ago

After testing with CIE Delta E for palette matching as well, it appears like the general algorithm for fixing a value (called i) is as follows. Note that i represents a positive or negative number being added to a linear RGB value during a dithering operation.

If i is positive, interpret it as if it's a brightness/grayscale value in the palette matching color space. Then convert that color into CIEXYZ, and get the luminance value by taking just the Y. Return Y.

For example, if the palette matching color space is sRGB, treat i as an sRGB color of the form (i, i, i). Then convert that to CIEXYZ, etc. Or if the palette matching color space is CIELAB, treat i as the L, and the a and b as 0.

This is equivalent to converting the value to linear RGB, and sometimes doing that can be a shortcut instead of converting to CIEXYZ. For example, if i is 0.5 and the palette matching color space is sRGB, then 0.5 can just be linearized.

This conversion to CIEXYZ / linear RGB is used to find the equivalent value in the palette matching color space, then converted back to linear RGB so it can actually be used. For example, if i is 0.5, then the algorithm will find the middle of the palette matching color space instead, and then convert that back into linear RGB so it can be used.

If i is negative, make it positive, then treat it as a linear RGB value (which it is), in the form (i, i, i). Convert it to the palette matching color space, and get the luminance/brightness/grayscale value in that color space, so there's just a single number. Return that number, as a negative.

This gets the actual intended value of the number in the palette matching color space. For example if i was -0.5 and the palette matching color space is sRGB, it will be converted to roughly -0.73.

The end result of this algorithm is that positive numbers are used to match the palette matching color space perceptually (where 0.5 is the middle of that color space), but negative numbers match the color space numerically - they are actually converted correctly, but then still used linearly.

For CIE Delta E, I pretended the palette matching color space was CIELAB, as that's the closest approximation. I'm not sure how to go about getting more accurate than that.

The purpose of this fixing is to allow dithering operations to be independent of the palette matching algorithm. Applying random numbers and Bayer matrices only works in linear RGB, but when the result is converted to another color space to try and find the closest palette color, it becomes too bright, because the linear RGB middle gray is higher than that of other color spaces (and not relevant to human vision). I believe the algorithm I described above will correct for this brightness increase. Does it make sense?

sobotka commented 3 years ago

It really feels like overcomplication here before the basics are well covered?

Given four pixels in a 2x2 array, how to dither the increments? What is the ground truth?

makew0rld commented 3 years ago

I understand I wrote a lot, but did you read any of it?

Given four pixels in a 2x2 array, how to dither the increments? What is the ground truth?

I'm not sure what ground truth means. Depending on what method is being used, different modifications will be made to those four pixels, in linear RGB space, and then the final clamped value of each one will be changed to a palette color, using another algorithm.

So for example, if random dithering is used, for each pixel, a random number between the defined min and max will be picked and added to the pixel's color value. Then the resulting color will be corrected to the nearest palette color.

This thread originally started by talking about whether using euclidean distance in sRGB or linear RGB was better for finding the nearest color. But in my most recent comments, I've been trying to find out what the best min and max values are instead. If we want the middle gray to be 50% black and 50% white (for a black and white palette), then how do we modify the intuitive min and max values of -0.5 and 0.5 to work around the middle gray of the color distance algorithm? That's what I've described above in earlier comments.

sobotka commented 3 years ago

There is only one ground truth. Again, think about two pixels at 100% and two at 0%. What does the percentage represent? What would the “dithered” result represent spatially?

That answer might bring some clarity as to appropriate math and appropriate models?

makew0rld commented 3 years ago

There is only one ground truth.

I don't understand the term "ground truth" in this context. Could you explain it?

Again, think about two pixels at 100% and two at 0%. What does the percentage represent?

In my case, the linear RGB grayscale value. Often this would be translated as (0, 0, 0) and (255, 255, 255).

the “dithered” result

If the palette contains the "colors" of 100% and 0%, then the dithering won't do anything; the result will be the same.

sobotka commented 3 years ago

What does 100% mean?

makew0rld commented 3 years ago

It means the brightest possible color the hardware and software can accommodate. In my code I am limited to describing it as (255, 255, 255) and calling it a day.

sobotka commented 3 years ago

So if it is a percentage of display emission, you should be able to dither those 2x2 pixels to any step increment using spatial increments, correct?

And if you were to try that, what is the sole model that delivers the proper emission levels?

makew0rld commented 3 years ago

I believe so.

If I'm trying to match display emission, I guess the correct model for the dithering incrementation would be sRGB, because that's the usual display emission model? I thought that because it's non-linear you shouldn't really do operations in that space. Would be interested in hearing whether that's incorrect.

sobotka commented 3 years ago

It is all relative to display linear emissions. The encoding variation might be useful to work backwards, but ultimately it is all light emission directly relative to the display, and it can be considered a mostly physical problem.

makew0rld commented 3 years ago

So what does that mean for the question that began up here?

makew0rld commented 3 years ago

I think I've gotten very confused because Firefox (my main browser) is displaying these images differently than Chromium, and Firefox is likely getting it wrong. So maybe the gradients I thought were bad are actually good, etc.

If anyone knows about this, I'd appreciate if they could take a look at this question I filed: https://superuser.com/questions/1620043/why-does-this-png-display-differently-in-firefox-vs-chrome

makew0rld commented 3 years ago

I'm sorry for wasting everyone's time. Due to an issue with my Firefox configuration, I wasn't seeing these images at 100%, which distorted their brightness, and led me to all kinds of crazy conclusions, trying to solve the problems I created. Pretty much all the conclusions I've made in this thread have been false or misguided.

However, I remain unconvinced that sRGB is better than linear RGB for dithering. When I take a look back at this comment, now that I can actually see the images properly, linear RGB looks better every time.

@KelSolaar mentioned in response to that comment:

Looking at both of them [the dithered gradients] while squinting with my eyes to "blur" the signal, neither looks particularly great :) Which one looks closer to the reference gradient for you?

The second one now very clearly looks better to me. Here they are again for reference:

Dithered gradient in sRGB:

dithered gradient in sRGB

Linear RGB:

dithered gradient in linear RGB

Source for those images: https://surma.dev/things/ditherpunk/

KelSolaar commented 3 years ago

Did it occur to you that your preference WRT matching could be a display calibration issue?

Here is a gamma test: generate a series of 10 96x96 checkerboards of decreasing luminance, each of them with the light colour halved from the previous one. Center those checkerboards against a 192x192 constant background with half the luminance of their respective checkerboard.

Encode that image with the inverse sRGB EOTF, then display it at 100%. If your display gamma is correct, you should not see the checkerboards when squinting slightly; alternatively, test a Gamma 2.2 encoding. If neither works, then your display is off the charts!
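Roughly, as a Go sketch (assumptions on my part: a pixel-level checker pattern, and "half the luminance" meaning half the light pixels' linear value, so that a correctly set up display blurs each checkerboard into its background):

package main

import (
    "image"
    "image/color"
    "image/png"
    "math"
    "os"
)

// delinearize is the inverse sRGB EOTF: linear light -> encoded value.
func delinearize(v float64) float64 {
    if v <= 0.0031308 {
        return 12.92 * v
    }
    return 1.055*math.Pow(v, 1.0/2.4) - 0.055
}

func main() {
    const n = 10
    img := image.NewGray(image.Rect(0, 0, n*192, 192))
    light := 1.0 // linear emission of the bright checker pixels
    for i := 0; i < n; i++ {
        // Assumption: background = half the light value, i.e. the checkerboard's
        // average emission, so a correct display blends them when squinting.
        bg := uint8(delinearize(light/2)*255 + 0.5)
        on := uint8(delinearize(light)*255 + 0.5)
        for y := 0; y < 192; y++ {
            for x := 0; x < 192; x++ {
                img.SetGray(i*192+x, y, color.Gray{bg})
            }
        }
        // 96x96 pixel-level checkerboard of "on" and black, centred in the tile.
        for y := 0; y < 96; y++ {
            for x := 0; x < 96; x++ {
                v := uint8(0)
                if (x+y)%2 == 0 {
                    v = on
                }
                img.SetGray(i*192+48+x, 48+y, color.Gray{v})
            }
        }
        light /= 2
    }
    f, _ := os.Create("gamma_check.png")
    defer f.Close()
    png.Encode(f, img)
}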

The takeaway is that 1) it is important to make sure that your display is calibrated, and 2) as you remove values, you remove the total energy arriving at your eyes, so you need to increase the luminance of the dithered pixels to make things appear similar.

sobotka commented 3 years ago

you remove the total energy arriving to your eyes so you need to increase luminance to make things appear similar.

I would add that what Thomas has described here as the perceptual facet of "luminance" is also purely calculated from the emission level; it is a direct spatial-to-emission value.

The second one now very clearly looks better to me. Here they are again for reference:

See above! You can calculate the overall emission level spatially and compare against the target value! It also helps to illustrate the spatial component here.

IE: For a given pixel density there is no guesswork; sample the pixel spatial region, figure out the emission, compare against the target. Which is closer to the target? Solved.
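A sketch of that check for a grayscale patch, using the linearize function quoted earlier in the thread:

// meanEmission averages the linear emission over a patch of sRGB-encoded
// pixel values, so the dithered region can be compared against its target.
func meanEmission(patch []float64) float64 {
    var sum float64
    for _, v := range patch {
        sum += linearize(v)
    }
    return sum / float64(len(patch))
}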

KelSolaar commented 3 years ago

The second one now very clearly looks better to me. Here they are again for reference:

Looking at it on a display at 100%, I concur!

I attached the two aforementioned charts:

EOTF__SDR_sRGB__HD1080

EOTF__SDR_Gamma_2 2__HD1080

makew0rld commented 3 years ago

The second one now very clearly looks better to me. Here they are again for reference:

Looking at it on a display at 100%, I concur!

@KelSolaar Glad to hear it! And thanks for the charts.

If the second one is better, wouldn't that suggest that using linear RGB distance for dithering makes more sense? When you look at the color images (at 100%) in this comment it looks like linear RGB is better for those too. I ask this because one of your earlier comments up here suggested that sRGB distance was better for human perception, but these images appear to contradict that.

KelSolaar commented 3 years ago

I ask this because one of your earlier comments up here suggested that sRGB distance was better for human perception, but these images appear to contradict that.

This comment was made in reference to your OP.

The problem in the dithering case is different though. You can factor out the observer because what you are interested in here is basically energy conservation. The idea being that, for a given pool of radiant power emitters, if you remove a certain number of them, by how much must the radiant power of the remaining ones be increased to be the same as that of the full pool? It is really a ratio, and doing those operations in a linear space is totally appropriate!
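A trivial worked example of that ratio (arbitrary numbers, purely illustrative):

package main

import "fmt"

func main() {
    full := []float64{0.25, 0.25, 0.25, 0.25} // four emitters, all on at linear 0.25
    dithered := []float64{0.5, 0, 0.5, 0}     // half of them removed, survivors doubled
    sum := func(xs []float64) (s float64) {
        for _, x := range xs {
            s += x
        }
        return
    }
    // Total radiant power is conserved: 1 and 1. The scaling factor (4/2 = 2)
    // is a ratio of linear emission values, which is why this has to be done
    // in a linear space.
    fmt.Println(sum(full), sum(dithered))
}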

makew0rld commented 3 years ago

That's very interesting, I hadn't thought of it in terms of energy conservation. Thanks! I will stick to linear RGB from now on.

I suppose that answers any questions about doing dithering in other color spaces like CIELAB. Because those color spaces are not linear in terms of radiant power. Right?

KelSolaar commented 3 years ago

Yes, they all model our perceptual response to luminance, so they would not be great for operations where you must manipulate energy quantities. A typical example is rendering: solving the rendering equation must be done in a linear space, especially with indirect rendering, where you simulate light bouncing in a scene.

makew0rld commented 3 years ago

Closing this, now that everything is wrapped up. Thanks again for all the help, and sorry for the confusion.