maximeraafat / BlenderNeRF

Easy NeRF synthetic dataset creation within Blender
MIT License

The color of the object in the generated image is different. #31

Closed wangjinhoon closed 7 months ago

wangjinhoon commented 7 months ago

Hello, I'm new to Blender. I used the object below to run COS, but the image turns out as shown below. What did I do wrong? (Screenshot from 2024-02-13 15-51-02)

Below are the generated images (0001, 0002).

Sorry for the many questions. It seems like the COS method renders images in random order, but can I modify the code to get a sequence of frames starting from a fixed point?

It seems like the SOF method behaves similarly to what I want, right? And does this feature include rendering? Can rendering be disabled?

Thanks for your reply!

maximeraafat commented 7 months ago

Hi @wangjinhoon, thanks for your question. A rendered object appears pink when the textures for the corresponding mesh are missing. If you head to the shader editor, you should be able to change the path to the texture images for the materials in question. If you're struggling with this, just google "Blender missing textures", you'll find plenty of solutions.

As for your second question, the COS method (which stands for Camera on Sphere, not cosine 😉) renders images randomly sampled from a sphere. If you'd rather render images in a sequential motion, I'd suggest setting up a camera that follows a curve or is attached (parented) to an empty object. You should find many resources and tutorials to set up such a system online as well.
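To make the difference concrete, here is a minimal sketch in plain Python (not part of the add-on; the function names are hypothetical) contrasting random sampling of camera positions on a sphere, which is what COS does, with a sequential orbit at a fixed elevation, which is the kind of motion a curve-following or parented camera would produce:

```python
import math
import random

def random_sphere_point(radius=1.0):
    # Uniformly sample one point on a sphere, as COS does for each frame:
    # z uniform in [-1, 1], azimuth uniform in [0, 2*pi).
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r_xy = math.sqrt(1.0 - z * z)
    return (radius * r_xy * math.cos(phi),
            radius * r_xy * math.sin(phi),
            radius * z)

def orbit_points(n, radius=1.0, elevation_deg=30.0):
    # Sequential camera positions: n evenly spaced azimuths on a
    # circular orbit at a fixed elevation, in order from frame 0.
    elev = math.radians(elevation_deg)
    z = radius * math.sin(elev)
    r_xy = radius * math.cos(elev)
    return [(r_xy * math.cos(2.0 * math.pi * i / n),
             r_xy * math.sin(2.0 * math.pi * i / n),
             z) for i in range(n)]
```

In Blender itself you would get the sequential behaviour without any code, by parenting the camera to an empty and keyframing the empty's rotation, or by constraining the camera to follow a circle curve.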

Note that all three methods (SOF, TTC and COS) by default render the camera views for the selected cameras and frames (details in the README). You can disable rendering by unticking the Render Frames button in the add-on panel.

Hope this helps!

wangjinhoon commented 7 months ago

Thank you for always kindly guiding me. The file I attached above generated training images very quickly. However, when I ran COS with other objects containing textures, the speed of generating training images significantly slowed down. Is there a way to speed up this process?

maximeraafat commented 7 months ago

Rendering speed depends on many things: the render engine (Cycles vs EEVEE), the polygon count (how many objects are in your scene, and how many vertices per object), the complexity of your shaders/materials, the lighting environment, etc. But in most cases, the main reason for drastic changes in performance is the render engine. If you are using Cycles, you might want to fall back to EEVEE for faster rendering.
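If you prefer to switch engines from a script rather than the UI, a small config tweak like the following (run inside Blender's Scripting tab or Python console; it will not run in a standalone Python interpreter) should do it. Note the engine identifier is an assumption about your Blender version: it is `'BLENDER_EEVEE'` up to Blender 4.1 and `'BLENDER_EEVEE_NEXT'` from 4.2 onwards.

```python
# Run inside Blender (Scripting tab / Python console), not standalone Python.
import bpy

scene = bpy.context.scene
print(scene.render.engine)  # current engine, e.g. 'CYCLES'

# Switch to EEVEE for much faster, rasterization-based rendering.
# Use 'BLENDER_EEVEE_NEXT' instead on Blender 4.2+.
scene.render.engine = 'BLENDER_EEVEE'
```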

EEVEE is designed as a real-time render engine: it approximates the rendering equation and global illumination (how the light in your environment interacts with your scene) using a technique called rasterization, similar to what many video games do. With EEVEE, render times should not exceed a few seconds per frame, perhaps a minute for large resolutions.

Cycles, on the other hand, simulates global illumination much more accurately using a physics-based method called path tracing. It produces photorealistic lighting at the cost of much slower rendering.

There are many more tricks you can use in Blender to optimise rendering performance, for both EEVEE and Cycles; feel free to have a look online. Hope this helps!

wangjinhoon commented 7 months ago

Hello @maximeraafat,

Exactly what I wanted. Thank you for explaining so kindly :)