yifanlu0227 / ChatSim

[CVPR2024 Highlight] Editable Scene Simulation for Autonomous Driving via LLM-Agent Collaboration
https://yifanlu0227.github.io/ChatSim

Is there any tutorial about foreground Blender rendering? #32

Open TurtleZhong opened 3 months ago

TurtleZhong commented 3 months ago

Hi, thanks for the amazing work! I am very curious about the foreground rendering. I would like to ask whether there are more materials and tutorials about Blender rendering, or a standalone foreground rendering pipeline. Thanks in advance!

yifanlu0227 commented 3 months ago

Hi!

You can use the foreground rendering independently, and we have updated its README. Please cd chatsim/foreground/Blender/utils and check chatsim/foreground/Blender/utils/README.md. No extra installation is needed if you have already installed ChatSim.

yifanlu0227 commented 3 months ago

As far as I know, there is no systematic tutorial for scripting Blender with Python beyond the official API documentation; we've also been implementing this part step by step to make it work. A good practice is to use a PC with a display and run the command without -b (Blender's background/headless flag). This reproduces every step in the Blender GUI.

You can refer to the official Blender documentation, the Blender community, StackOverflow, and ChatGPT for help, and we'd be happy to answer any questions you might have when running our code.

TurtleZhong commented 3 months ago

> Hi!
>
> You can use the foreground rendering independently, and we have updated its README. Please cd chatsim/foreground/Blender/utils and check chatsim/foreground/Blender/utils/README.md. No extra installation is needed if you have already installed ChatSim.

Hi, following the latest README in chatsim/foreground/Blender/utils/README.md, I can use blender --python blender_utils/main_multicar.py -- config/Wide_angle_test/ -- 0 -- 1 to open the Blender UI. I changed the position of one car and rendered, and I got two pictures: Image0001 and vehicle_and_shadow_over_background0001.

TurtleZhong commented 3 months ago

Hi, I have a new question about the Blender rendering process. I found that depth data is also needed in ChatSim (I found the code here), and your README file also says: depth: The depth of background. np.ndarray with dtype=float32, shape=[H, W]. Can be rendered results from NeRF. So I would like to know whether depth information is necessary and what role it plays in Blender's rendering process (maybe a depth check / occlusion detection?). If there is no depth information, can the rendering process still proceed normally?

yifanlu0227 commented 3 months ago

Hi, there are two kinds of depth in our framework: one is the background depth, and the other is the foreground depth.

The foreground depth is a product of Blender rendering. When you set depth_and_occlusion to true in the YAML, Blender will generate the depth of the 3D assets in the current viewport; more specifically, it is the output of the z-pass. The results are stored like depth/vehicle_and_plane0001.exr, and you can view .exr files with tev.
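If you prefer to inspect the z-pass programmatically instead of in tev, here is a minimal Python sketch. It assumes an OpenCV build with OpenEXR support, and the single-channel extraction is an assumption about how the pass is laid out in the file:

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2

import cv2
import numpy as np

# File name follows the depth/vehicle_and_plane0001.exr pattern above.
exr = cv2.imread("depth/vehicle_and_plane0001.exr",
                 cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)

# The z-pass value is usually replicated across channels; keep one.
depth = exr[..., 0] if exr.ndim == 3 else exr
print(depth.shape, depth.dtype)          # e.g. (H, W) float32
print("closest hit:", np.nanmin(depth))  # empty pixels may be huge / inf
```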

The background depth is what we expect users to provide to realize occlusion-aware composition. If it is accurate enough and depth_and_occlusion is true, we do a depth test against the foreground depth, which produces the rendering effect of the 3D assets being occluded by something in the background. See the code here. If depth_and_occlusion is false, we simply paste the rendered foreground object onto the background image. We can also simplify this process by setting the background depth to infinity (e.g. scene_file: assets/scene-demo-1137/0.npz contains very large depth values). After all, it is challenging to obtain an accurate background depth.
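For intuition, the depth test amounts to something like the following sketch. The function and array names here are hypothetical, not ChatSim's actual code:

```python
import numpy as np

def occlusion_aware_composite(bg_img, fg_img, fg_alpha, fg_depth, bg_depth):
    """Paste the Blender-rendered foreground onto the background image,
    hiding foreground pixels that lie behind the background geometry.

    bg_img, fg_img:     float arrays, shape [H, W, 3]
    fg_alpha:           float array,  shape [H, W], in 0..1
    fg_depth, bg_depth: float arrays, shape [H, W]
    """
    # Depth test: a foreground pixel survives only where it is closer
    # to the camera than the reconstructed background surface.
    visible = (fg_depth < bg_depth) & (fg_alpha > 0)
    alpha = np.where(visible, fg_alpha, 0.0)[..., None]
    return alpha * fg_img + (1.0 - alpha) * bg_img
```

Note that with bg_depth set to a huge constant, visible is true wherever the foreground has alpha, which degenerates into the simple paste described above.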

Coming back to your questions:

  1. Depth information is not necessary if you don't need occlusion. You can set depth_and_occlusion to false, or provide an infinite depth; both make the foreground always sit above the background (see the sketch after this list).
  2. Depth information is used for occlusion-aware composition.
  3. If no depth information is provided, you can still render normally.
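For point 1, a placeholder "infinite" background depth can be as simple as the sketch below. The resolution and the 1e8 constant are illustrative, and how the depth is packed into the scene .npz depends on your data format:

```python
import numpy as np

H, W = 1280, 1920  # your camera resolution

# Every foreground pixel passes the depth test against this map,
# so the 3D asset is always composited on top of the background.
fake_bg_depth = np.full((H, W), 1e8, dtype=np.float32)
```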

See more discussion in issue #27 and issue #13.