TencentARC / InstantMesh

InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models

Issue about the rendering of filtered objaverse #13

Open · wenqsun opened this issue 7 months ago

wenqsun commented 7 months ago

Thanks for your great work!

I wonder how you selected the ranges of elevations and azimuths when rendering the filtered Objaverse dataset. This may influence the performance of the trained model.

I am looking forward to your reply!

kunalkathare commented 7 months ago

Hi @wenqsun, can you please provide some guidance on how to set up the dataset properly?

wenqsun commented 7 months ago

Oh, I am just thinking through this issue: the elevations and azimuths of the training data may influence inference performance. For example, this work uses Zero123++ to generate multi-view images, and when I use other multi-view diffusion models with different elevations and azimuths, the generated mesh degrades somewhat.

So I wonder how the authors selected the range of camera poses for the training dataset, which may be tied to the multi-view diffusion model used during the inference stage. This would help me redesign my rendering strategy.
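
For concreteness, Zero123++ generates its six views at fixed poses; as I understand it (worth verifying against the version you use), the elevations alternate between 20 and -10 degrees, with azimuths spaced 60 degrees apart:

import numpy as np

# Zero123++'s fixed six-view grid (my understanding; verify for your version).
elevations = np.array([20.0, -10.0, 20.0, -10.0, 20.0, -10.0])  # degrees
azimuths = np.arange(30.0, 360.0, 60.0)  # 30, 90, 150, 210, 270, 330 degrees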

bluestyle97 commented 7 months ago

@wenqsun @kunalkathare We render our dataset following the settings of LRM. Specifically, each 3D model is normalized into a unit cube in the range [-1, 1]. We render 32 views per model, with camera distances in the range [2.0, 3.5], camera heights (z axis) in the range [-0.75, 1.6], and a FOV of 50 degrees.
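
For concreteness, a minimal sketch of sampling poses under these ranges (an illustration only, not the actual rendering script; the uniform sampling and z-up look-at construction are assumptions):

import numpy as np

def sample_camera_pose(rng):
    """Return a 4x4 OpenGL-style camera-to-world matrix looking at the origin."""
    distance = rng.uniform(2.0, 3.5)         # camera distance range from above
    height = rng.uniform(-0.75, 1.6)         # camera height (z axis) range
    azimuth = rng.uniform(0.0, 2.0 * np.pi)  # azimuth is unconstrained
    # Place the camera at the sampled height on a circle of the remaining radius.
    radius = np.sqrt(distance**2 - height**2)  # height < distance always holds here
    position = np.array([radius * np.cos(azimuth), radius * np.sin(azimuth), height])
    forward = -position / np.linalg.norm(position)  # view direction toward origin
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    c2w = np.eye(4)
    c2w[:3, 0], c2w[:3, 1], c2w[:3, 2], c2w[:3, 3] = right, up, -forward, position
    return c2w

rng = np.random.default_rng(0)
poses = [sample_camera_pose(rng) for _ in range(32)]  # 32 views per model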

Our reconstruction model is trained with free-viewpoint images, so its performance should be insensitive to which multi-view diffusion model is used. Since you mentioned that "when I use other multi-view diffusion models with different elevations and azimuths, the generated mesh somehow degrades", could you please clarify which multi-view diffusion model you are using? Also, how did you set the corresponding input camera poses?

kunalkathare commented 7 months ago

Hi @bluestyle97, I have used the Blender script from LRM, so I now have a folder with 32 poses, a folder with 32 rendered images, and an intrinsic.npy file for each object. I have created folders in the following structure:

  • InstantMesh
    • data
      • objaverse
        • input_image_folder
        • target_image_folder
        • filtered_obj_name.json
      • valid_samples

Can you please tell me exactly what to put in these folders, and the format of the JSON file?
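
My working guess (unconfirmed by the authors; please correct me) is that filtered_obj_name.json is simply a flat list of Objaverse object UIDs that the dataloader reads, along these lines:

import json

# Guess: a flat JSON list of UID strings naming the filtered subset (unconfirmed).
with open("data/objaverse/filtered_obj_name.json") as f:
    obj_names = json.load(f)
print(len(obj_names), "filtered objects")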

Hi @JustinPack, can you please provide some guidance, or share the script file you used to render Objaverse?

charchit7 commented 6 months ago

Hey @kunalkathare, by LRM you mean OpenLRM, right?

kunalkathare commented 6 months ago

> Hey @kunalkathare, by LRM you mean OpenLRM, right?

Yes

rfeinman commented 6 months ago

Hi @bluestyle97 - I had a look at the OpenLRM Blender script, but it does not include depth maps or normals. I have been using the following additional bpy settings to render normal maps, but they do not seem quite in line with what your InstantMesh model expects. Am I doing something wrong? This is with Blender v4.0.0.


import bpy

# Enable compositor nodes so scene.node_tree exists.
bpy.context.scene.use_nodes = True
nodes = bpy.context.scene.node_tree.nodes
links = bpy.context.scene.node_tree.links
nodes.clear()
render_layers = nodes.new("CompositorNodeRLayers")

# Enable the normal pass on the active view layer.
bpy.context.view_layer.use_pass_normal = True

# Rescale normals from [-1, 1] to [0, 1]: n * 0.5 + 0.5
node_normal_scale = nodes.new(type="CompositorNodeMixRGB")
node_normal_scale.blend_type = "MULTIPLY"
node_normal_scale.inputs[2].default_value = (0.5, 0.5, 0.5, 1)
links.new(render_layers.outputs["Normal"], node_normal_scale.inputs[1])
node_normal_bias = nodes.new(type="CompositorNodeMixRGB")
node_normal_bias.blend_type = "ADD"
node_normal_bias.inputs[2].default_value = (0.5, 0.5, 0.5, 0)
links.new(node_normal_scale.outputs[0], node_normal_bias.inputs[1])

# Save normals as a PNG image via a File Output node.
node_normal = nodes.new(type="CompositorNodeOutputFile")
node_normal.label = "Normal Output"
node_normal.base_path = "/"
node_normal.file_slots[0].use_node_format = True
node_normal.format.file_format = "PNG"
node_normal.format.color_mode = "RGBA"
links.new(node_normal_bias.outputs[0], node_normal.inputs[0])
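
One thing worth checking (an assumption about your setup, not a confirmed fix): with display-referred formats such as PNG, Blender bakes the scene's view transform (AgX by default in 4.0) into compositor output, which distorts the encoded normal values. Forcing the standard transform keeps them linear:

# Disable tone mapping so the encoded normals stay linear in [0, 1].
bpy.context.scene.view_settings.view_transform = "Standard"
# Optional: 16-bit PNGs preserve more precision for normals.
node_normal.format.color_depth = "16"
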
Mrguanglei commented 5 months ago

@kunalkathare Hello, I also have the LRM dataset, likewise rendered in Blender. How can I use it to train InstantMesh? Thank you very much for your help.

abrar-khan-368 commented 4 months ago

Does anyone know how to prepare the data for training? I referred to the OpenLRM Blender script; will it be the same in the case of InstantMesh?