RobotLocomotion / drake

Model-based design and verification for robotics.
https://drake.mit.edu

Configurable lighting in rendering engines #12593

Open tehbelinda opened 4 years ago

tehbelinda commented 4 years ago

Currently there is no proper way to configure the lighting in our rendering engines. A temporary method for configuring the position of the single default light landed in #12440, but it is not intended for public use. Things that we could configure include:

- The available options for lights in VTK-based render engines can be seen in https://vtk.org/doc/nightly/html/classvtkLight.html
- The render engine API should be generic enough to handle different implementations of lights, e.g. not all rendering systems support area lights
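To make the "generic enough" requirement concrete, here is a minimal sketch of what an engine-agnostic light description could look like. All names here (`LightSpec`, `LightType`, `supported`) are hypothetical, not Drake API; the point is that the specification is independent of any one renderer, and each engine advertises which light types it can actually implement.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Set, Tuple


class LightType(Enum):
    POINT = "point"
    SPOT = "spot"
    DIRECTIONAL = "directional"
    AREA = "area"          # not every engine supports this


@dataclass
class LightSpec:
    """Engine-agnostic light description (hypothetical sketch)."""
    type: LightType
    color: Tuple[float, float, float] = (1.0, 1.0, 1.0)  # RGB in [0, 1]
    intensity: float = 1.0
    position: Tuple[float, float, float] = (0.0, 0.0, 1.0)
    direction: Tuple[float, float, float] = (0.0, 0.0, -1.0)
    cone_angle: float = 45.0  # degrees; meaningful for spot lights only


def supported(engine_capabilities: Set[LightType], light: LightSpec) -> bool:
    """Each engine declares the LightType values it implements."""
    return light.type in engine_capabilities


# A typical rasterizer would not advertise AREA:
rasterizer_caps = {LightType.POINT, LightType.SPOT, LightType.DIRECTIONAL}
```

With a capability set like this, the framework can decide up front whether to pass a light through, approximate it, or raise an error, rather than letting each engine fail in its own way.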

cc @SeanCurtis-TRI

SeanCurtis-TRI commented 4 years ago

We probably want to consider environment maps as well (something that has light-emitting properties for path tracing, and a reflective effect for reflective surfaces).

When you get to types of lights, you have additional things to consider:

For path tracing, you want to consider area lights, and then figure out what to do if someone requests an area light from a renderer that doesn't support area lights (approximate with multiple lights, substitute a single light, or throw).
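Of the three fallbacks, the "approximate with multiple lights" option can be sketched as follows. This is purely illustrative (the function name and parameters are invented, not Drake API): a rectangular area light is replaced by an n×n grid of point lights, each carrying an equal share of the total intensity so overall illumination energy is roughly preserved.

```python
import itertools


def approximate_area_light(center, size, intensity, samples_per_side=4):
    """Approximate a rectangular area light with a grid of point lights.

    Hypothetical fallback for renderers lacking area-light support.
    Returns a list of (position, intensity) pairs whose intensities
    sum to the original total.
    """
    cx, cy, cz = center
    sx, sy = size
    n = samples_per_side
    share = intensity / (n * n)
    lights = []
    for i, j in itertools.product(range(n), repeat=2):
        # Place each sample at the center of its grid cell.
        x = cx - sx / 2 + sx * (i + 0.5) / n
        y = cy - sy / 2 + sy * (j + 0.5) / n
        lights.append(((x, y, cz), share))
    return lights


# A 1m x 1m panel at height 2m, total intensity 16, as a 4x4 grid:
panel = approximate_area_light(center=(0.0, 0.0, 2.0),
                               size=(1.0, 1.0), intensity=16.0)
```

The trade-off is image quality versus light count: more samples give softer, more accurate shadows but cost the rasterizer one shadow pass per light.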

Configuring shadows will be an issue as well. For any rasterization pipeline, there will be levers for controlling shadows (via shadow maps) that affect the final images. Depending on the algorithm, some of these values can be inferred from the scene and some cannot. It will be important to expose those levers.
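The "inferred from the scene vs. user-supplied" split could look like the sketch below (names are hypothetical, not a real Drake or VTK API): the shadow-map resolution is an explicit lever, while the depth bias, which suppresses "shadow acne" self-shadowing artifacts, defaults to a value derived from the scene's size when the user doesn't set it.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ShadowMapConfig:
    """Hypothetical shadow-map levers for a rasterization pipeline."""
    resolution: int = 2048               # texels per side of the depth map
    depth_bias: Optional[float] = None   # None => infer from the scene


def resolve_depth_bias(config: ShadowMapConfig,
                       scene_diameter: float) -> float:
    """Pick a depth bias: explicit if given, else inferred.

    The map's depth precision is spread across the scene, so a
    plausible default is on the order of one world-space texel,
    i.e. it scales with scene size over map resolution.
    """
    if config.depth_bias is not None:
        return config.depth_bias
    # Roughly one texel of slack, times a small safety factor.
    return 2.0 * scene_diameter / config.resolution


inferred = resolve_depth_bias(ShadowMapConfig(), scene_diameter=10.0)
explicit = resolve_depth_bias(ShadowMapConfig(depth_bias=0.001),
                              scene_diameter=10.0)
```

This is the pattern the comment above argues for: sensible scene-derived defaults, with every lever still overridable for users who need exact control.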

adamconkey commented 1 year ago

Has any progress been made on this? Or is there a workaround, or a hook into a better rendering engine? I was surprised to find that despite all the lights I added to the Meshcat scene, none of them were picked up by my RgbdSensor. This is problematic for me since lighting conditions are something I need to vary to evaluate a perception system.

jwnimmer-tri commented 1 year ago

FYI, the (currently incomplete) https://github.com/RobotLocomotion/drake-blender will be one way to deal with this.

Drake's glTF render engine (see MakeRenderEngineGltfClient) will dump the perception-role SceneGraph data to a glTF string and HTTP POST it to a server. The server then renders a still frame of the glTF scene and returns it as a PNG file, which will appear on the RgbdSensor output port.

There isn't necessarily any configurable lighting information transmitted in the glTF (because SceneGraph doesn't know about lighting yet), but on the server side the user can customize their renderer to add lighting, or whatever else they please, beyond the glTF data. For example, the Blender server can load a `*.blend` file in addition to the glTF.

Users who write their own server could also customize it to their liking.
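To show how small such a server can be, here is a self-contained stub of the POST round trip using only the Python standard library. This is not Drake's actual client-server protocol (the real one, documented with drake-blender, uses multipart form fields and camera parameters); the handler here just accepts a POSTed glTF payload and replies with stand-in PNG bytes, which is where a real server would add its own lights and call its renderer.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"  # every PNG file starts with these bytes


class StubRenderHandler(BaseHTTPRequestHandler):
    """Toy stand-in for a glTF render server (not Drake's real protocol)."""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        gltf_payload = self.rfile.read(length)  # glTF JSON from the client
        assert gltf_payload  # a real server would parse and render this
        body = PNG_MAGIC + b"...rendered image data would go here..."
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


def render_via_http(url: str, gltf_text: str) -> bytes:
    """POST a glTF scene to the server and return the PNG bytes."""
    req = urllib.request.Request(
        url, data=gltf_text.encode(),
        headers={"Content-Type": "model/gltf+json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()


# Round trip against the stub, bound to an ephemeral local port.
server = HTTPServer(("127.0.0.1", 0), StubRenderHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
png = render_via_http(f"http://127.0.0.1:{server.server_port}/render",
                      '{"asset": {"version": "2.0"}}')
server.shutdown()
```

Swapping the stub body for a call into Blender (plus a user-supplied `*.blend` with lights) is essentially what drake-blender does; the glTF-over-HTTP boundary is what keeps the rendering side fully user-customizable.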


To be clear, we should still consider whether SceneGraph should grow more configuration for these details. But in the meantime, the glTF http call was designed to make it as easy as possible for users to mate their own rendering implementation into Drake.