CodingTrain / Suggestion-Box

A repo to track ideas for topics

Real-time lighting via 18% grey sphere #71

Open booomji opened 8 years ago

booomji commented 8 years ago

Hello, I'm from the visual effects side of things, so I'm hoping you can cover (or uncover) how to extract lighting information from a grey sphere in real time. It does not have to be very accurate.

I am developing for Microsoft's Hololens. https://www.microsoft.com/microsoft-hololens/en-us

It would be terrific if I could sample the lighting from a real environment (via a reflection probe and a grey probe) so that CG objects can be integrated better into their surroundings.

Have a read of this excellent article from the makers of Unreal Engine to understand how light and reflections are captured for use in VFX and game engines: https://www.unrealengine.com/blog/updated-lighting-in-templates

Can you create a script / algorithm that can:

  1. Capture diffuse lighting and direction from a grey matte-finish sphere in real time.
  2. It does not have to be accurate. Any progress via automation is a win for the Mixed Reality industry.

Do let me know if I have not been very clear in explaining what I'm after. Cheers, behram

Dan-Ritchie commented 8 years ago

It sounds a lot like you want to make something like a light probe image: http://ict.debevec.org/~debevec/HDRShop/tutorial/tutorial5.html

Basically, a light probe is a spherically mapped image used to light a scene with lighting captured from the real world. Typically it is used with global illumination sampling algorithms that take multiple samples in random directions within the hemisphere around a surface normal. The samples are accumulated, and the sum of the accumulated samples is the color of the light for the surface; it might then be multiplied by a surface texture. Since sampling in random directions is a lot like applying a blur convolution, you could instead use a very blurry version of your light probe image and take a single sample along the reflection vector: http://paulbourke.net/geometry/reflected/
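To make that concrete, here is a rough Python/NumPy sketch of the idea, assuming a square angular-map probe image in the HDRShop format linked above and using SciPy's Gaussian blur for the pre-blurring step. The probe array, the blur radius, and the helper names are placeholders for illustration, not from any existing library:

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def reflect(view_dir, normal):
    """Reflect the (unit) view direction about the (unit) surface normal."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    normal = normal / np.linalg.norm(normal)
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal


def probe_lookup(probe, direction):
    """Sample an angular-map light probe image in a given world-space direction."""
    dx, dy, dz = direction / np.linalg.norm(direction)
    denom = max(np.sqrt(dx * dx + dy * dy), 1e-8)        # avoid divide-by-zero at the poles
    r = np.arccos(np.clip(dz, -1.0, 1.0)) / (np.pi * denom)
    u, v = dx * r, dy * r                                 # angular-map coords, both in [-1, 1]
    h, w = probe.shape[:2]
    col = int((u * 0.5 + 0.5) * (w - 1))
    row = int((-v * 0.5 + 0.5) * (h - 1))
    return probe[row, col]


# Placeholder probe image; in practice this would be a photo of a mirrored ball
# remapped to the angular-map format described in the HDRShop tutorial.
probe = np.random.rand(256, 256, 3).astype(np.float32)

# Pre-blurring the probe stands in for averaging many random samples in the
# hemisphere around the normal, as described above.
diffuse_probe = gaussian_filter(probe, sigma=(20, 20, 0))

normal = np.array([0.0, 1.0, 0.0])                        # surface facing straight up
view = np.array([0.0, -0.3, -1.0])                        # camera looking slightly down

specular = probe_lookup(probe, reflect(view, normal))     # sharp mirror-like reflection
diffuse = probe_lookup(diffuse_probe, normal)             # soft, Lambert-like lighting
print(diffuse, specular)
```

The key point is that blurring the probe once, up front, stands in for averaging many random hemisphere samples at render time, so only a single lookup per pixel is needed, which is what makes a real-time approximation plausible.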

Dan-Ritchie commented 8 years ago

Now that I think about it a little longer, a gray ball, like a rubber ball, would have a very Lambert (completely scattered light) quality. In that case, sampling from the normal vector would be a reasonable simplification of random sampling within the hemisphere around the normal. So, I think, forget about the reflection vector. All we really need to do is convert the surface normal to a point on the 2D ball image.

Now, this sounds reasonable in my head... Recall that the surface normal on a sphere matches the sphere's geometry: a line pointing from the center of a sphere to a vertex on its surface has the same direction as the surface normal there. Or, if you plot all the possible directions of a surface normal, they trace out a complete sphere. A unit normal has components ranging between -1 and 1 on each axis. Let's make the normal and the 2D image the same size: say the image spans -1 to 1 on each axis. Since the image is 2D and our surface normal is 3D, let's drop the z axis of the normal. Now if we compare our surface normal to our image, it maps exactly to points within the gray-ball image: just take the x, y values of the normal and map them directly onto the normalized image.

I got to try this out.
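As a rough illustration of that mapping, here is a minimal Python/NumPy sketch. It assumes the grey ball has already been cropped to a square image in which the sphere exactly fills the frame; the function and variable names are made up for the example:

```python
import numpy as np


def sample_gray_ball(ball_image, normal):
    """Look up diffuse lighting for a surface normal in a photographed grey ball.

    Assumes a square crop in which the sphere exactly fills the frame, so pixel
    coordinates map to [-1, 1] on each axis.
    """
    nx, ny, _ = normal / np.linalg.norm(normal)   # drop the z component, as described above
    h, w = ball_image.shape[:2]
    col = int((nx * 0.5 + 0.5) * (w - 1))         # x in [-1, 1] -> image column
    row = int((-ny * 0.5 + 0.5) * (h - 1))        # y in [-1, 1] -> image row (image y points down)
    return ball_image[row, col]


ball_image = np.random.rand(200, 200, 3)          # placeholder for the cropped grey-ball photo
up_facing = np.array([0.0, 1.0, 0.0])
print(sample_gray_ball(ball_image, up_facing))    # approximate light arriving from above
```

Note that this only sees the camera-facing half of the ball: dropping the z axis folds normals that point away from the camera onto the same pixels, which fits the "does not have to be accurate" spirit of the request.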

booomji commented 8 years ago

Wow! Thank you so much for the explanations, Dan. There is so much to take in here.

Really appreciate the help and knowledge you are sharing with the Community.

If you make it work, you could sell it as a plugin for augmented reality (Pokemon Go!) applications.

Cheers, behram