Closed zhoushiwei closed 2 weeks ago
Could you please tell me how much time is spent on pre-rendering in Blender and generating materials, respectively?
It seems pre-rendering takes about half an hour, and loading the images is very slow.
By the way, right now I'm using a dog model to generate textures and the results seem poor, while your sample models work well. Could this be directly related to the number of vertices in the mesh?
The pre-rendering phase performs a large number of read and write operations, so it is quite sensitive to CPU performance (about 15 min on an i9-14900K). If the process seems too slow, increasing the number of threads may help. The material generation phase should not be excessively time-consuming on an A100; our tests on a V100 didn't exceed 30 minutes.
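Since the pre-rendering bottleneck is file I/O rather than computation, a thread pool is one way to overlap the blocking reads. A minimal sketch (the file names and worker count here are illustrative, not from the repo):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def load_bytes(path):
    # I/O-bound work: the thread blocks on disk, so threads overlap well here
    with open(path, "rb") as f:
        return f.read()

# demo stand-ins for pre-rendered images
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(4):
    p = os.path.join(tmpdir, f"render_{i}.bin")
    with open(p, "wb") as f:
        f.write(bytes([i]) * 8)
    paths.append(p)

# tune max_workers to your CPU/disk; threads help because reads release the GIL
with ThreadPoolExecutor(max_workers=16) as pool:
    blobs = list(pool.map(load_bytes, paths))
print(len(blobs))  # 4
```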
The generated appearance is indeed influenced by the quality of the mesh itself; an insufficient face count can lead to a lack of detail. In the paper we presented two dog models (dog_models.zip) with face counts of 21,606 and 4,316, respectively. Their results are as follows:
You can generate these by running the following command:
python launch.py --config configs/dreammat.yaml --train --gradio --gpu 0 system.prompt_processor.prompt="An adorable bulldog puppy" system.geometry.shape_init=mesh:load/shapes/objs/bulldog.obj trainer.max_steps=3000 system.geometry.shape_init_params=1.0 data.blender_generate=true
python launch.py --config configs/dreammat.yaml --train --gradio --gpu 0 system.prompt_processor.prompt="A shiba dog has a black collar and a light brown fur" system.geometry.shape_init=mesh:load/shapes/objs/shibadog.obj trainer.max_steps=3000 system.geometry.shape_init_params=1.0 data.blender_generate=true
Thanks for the reply. I've now found another problem: the vertex count and texture of the generated mesh are inconsistent with the input mesh. Is there a solution for this? How do I keep the output mesh exactly the same as the input mesh?
The mesh is loaded by trimesh and preprocessed in trimesh.load. But I found a solution in https://github.com/mikedh/trimesh/issues/2154; maybe you can try adding those parameters in dreammat_mesh.py, line 150.
Also, the UV is flipped in the exported mesh, so if you want to align them, it should be handled in export_obj_with_mtl in mesh_exporter.py by flipping both the UV and the exported texture.
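A minimal sketch of that flip (the helper name is hypothetical, not from the repo): flipping V in UV space must be paired with flipping the texture rows, or the mapping breaks again.

```python
import numpy as np

def flip_uv_and_texture(uv, texture):
    # Hypothetical helper: flip the V coordinate and the texture vertically
    # together so the UV mapping stays consistent after export.
    uv = uv.copy()
    uv[:, 1] = 1.0 - uv[:, 1]      # flip V in UV space
    texture = texture[::-1].copy()  # flip image rows to match
    return uv, texture

uv = np.array([[0.2, 0.25], [0.8, 0.9]])
tex = np.arange(12).reshape(3, 4)   # toy "texture" image
new_uv, new_tex = flip_uv_and_texture(uv, tex)
print(new_uv[0, 1])  # 0.75
```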
Right now my reconstruction scenario is texture reconstruction for pets. If I retrain ControlNet using only pet-related data, do you think it will work better than your ControlNet?
I haven't tried training exclusively on a single category of objects, but the quality of ControlNet is indeed related to the text prompt and the quality of the 3D meshes. It's worth mentioning that if you are working with pets, you can set the metallic directly to 0 to avoid errors in areas where the fur color changes significantly.
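That amounts to zeroing the metallic map before export, since fur is a dielectric. A sketch with an illustrative array name (not the repo's actual variable):

```python
import numpy as np

# Assumed stand-in for the predicted metallic map (H x W, values in [0, 1])
metallic = np.random.rand(64, 64).astype(np.float32)

# Force metallic to 0 everywhere; fur is dielectric, so this removes
# spurious metallic highlights where the fur color varies sharply.
metallic[:] = 0.0
print(metallic.max())  # 0.0
```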
I don't know why.