postech-ami / Paint-it

[CVPR'24] Official PyTorch Implementation of "Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering"
https://kim-youwang.github.io/paint-it
MIT License

How can I get the normal texture along with the Ks and Kd maps? #12

Closed ghost closed 7 months ago

ghost commented 7 months ago

When you import the mesh and the generated texture maps into Blender, you have to define the same shading pipeline as NVDiffrast (which we used to render and train our texture).

You can refer to https://github.com/postech-ami/Paint-it/issues/5#issuecomment-2039579293, where you can find a Blender Python script that imports your mesh and texture and reproduces a setup similar to NVDiffrast's.

Please let us know if you have more questions. Thanks.

Originally posted by @Youwang-Kim in https://github.com/postech-ami/Paint-it/issues/7#issuecomment-2048916325

Youwang-Kim commented 7 months ago

Hi, thanks for your interest in our work.

Our code automatically saves the Kd, Ks, and normal maps in the log directory. Please refer to the code below: https://github.com/postech-ami/Paint-it/blob/407c55b4255dd8e13fd950313ea860cf0ffe71e2/paint_it.py#L252-L263

The write_obj(output_dir, vis_mesh) line will save all the texture maps that you need, including the normal map.
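As a quick sanity check after a run, you could load the saved maps with something like the sketch below. The texture_kd.png / texture_ks.png / texture_n.png file names and the log directory path are assumptions following the NVDiffrast/nvdiffmodeling convention; check your own log directory for the exact names.

```python
import imageio  # imageio is already in the project's dependency list

# Hypothetical paths: replace log_dir with your actual log directory; the file
# names below are assumed, not guaranteed.
log_dir = "logs/<your_run>"
kd = imageio.imread(f"{log_dir}/texture_kd.png")    # diffuse / albedo map (UV space)
ks = imageio.imread(f"{log_dir}/texture_ks.png")    # specular parameter map (UV space)
nrm = imageio.imread(f"{log_dir}/texture_n.png")    # tangent-space normal map (UV space)
print(kd.shape, ks.shape, nrm.shape)
```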

Please let us know if you have more questions.

Thanks.

Youwang-Kim commented 7 months ago

Please note that the saved texture maps are in UV coordinates, so the texture map images themselves will look different from the rendered images of the mesh.

Youwang-Kim commented 7 months ago

Which rendering engine did you use? During generation, we use NVIDIA's differentiable rasterizer (NVDiffrast) and its shader, so if you're using another renderer or shader, you must reproduce our shading pipeline.

For example, if you're using Blender, you can refer to the scripts mentioned here: https://github.com/postech-ami/Paint-it/issues/5#issuecomment-2039579293

But still, please note that the rendering pipelines for NVDiffrast and Blender are different, so they may not look identical.
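For reference, a rough, illustrative sketch of wiring the exported maps into Blender's Principled BSDF with bpy is below. This is not the script from issue #5; node names assume Blender 3.3+ with an English locale, the texture file names are assumptions, and the Ks channel mapping (G -> Roughness, B -> Metallic) is an assumption, so treat the linked script as the authoritative reference.

```python
import bpy

# Illustrative material setup; file names and Ks channel mapping are assumptions.
mat = bpy.data.materials.new(name="paint_it_material")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]

# Diffuse (Kd) -> Base Color
kd = nodes.new("ShaderNodeTexImage")
kd.image = bpy.data.images.load("//texture_kd.png")
links.new(kd.outputs["Color"], bsdf.inputs["Base Color"])

# Specular parameters (Ks) -> split channels into Roughness / Metallic (assumed packing)
ks = nodes.new("ShaderNodeTexImage")
ks.image = bpy.data.images.load("//texture_ks.png")
ks.image.colorspace_settings.name = "Non-Color"  # data map, not sRGB color
sep = nodes.new("ShaderNodeSeparateColor")
links.new(ks.outputs["Color"], sep.inputs["Color"])
links.new(sep.outputs["Green"], bsdf.inputs["Roughness"])
links.new(sep.outputs["Blue"], bsdf.inputs["Metallic"])

# Tangent-space normal map -> Normal input via a Normal Map node
nrm_tex = nodes.new("ShaderNodeTexImage")
nrm_tex.image = bpy.data.images.load("//texture_n.png")
nrm_tex.image.colorspace_settings.name = "Non-Color"
nrm = nodes.new("ShaderNodeNormalMap")
links.new(nrm_tex.outputs["Color"], nrm.inputs["Color"])
links.new(nrm.outputs["Normal"], bsdf.inputs["Normal"])

# Assign the material to the imported mesh (assumed to be the active object)
bpy.context.active_object.data.materials.append(mat)
```

Even with identical maps, Cycles/Eevee shading will not match NVDiffrast exactly, as noted above.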

Youwang-Kim commented 7 months ago

Hi, did you run the code with the default parameters that we suggested? I just ran the code with the default settings, and I got the result below:

[screenshot: final_top result]

Also, since our method uses Score-Distillation Sampling (SDS), which involves random sampling of diffusion timesteps, your result may vary from the one in our paper. However, it shouldn't differ as much as the result you showed.

Please check your arguments or other settings.

Youwang-Kim commented 7 months ago

Getting 1.000 for SDS loss is normal behavior.

If you look at the code implementation of SDS, it computes a custom gradient because, following the original DreamFusion paper, we need to omit the diffusion U-Net Jacobian. The forward pass just outputs a dummy value of 1.0; SDS supplies the backward gradient directly rather than an exact loss value. Please refer to the code below:

https://github.com/postech-ami/Paint-it/blob/407c55b4255dd8e13fd950313ea860cf0ffe71e2/sd.py#L16-L22
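For intuition, the usual pattern looks like the minimal sketch below (in the style of stable-dreamfusion's SpecifyGradient; names are illustrative and this is not copied from sd.py). The forward pass returns a dummy scalar, and the backward pass injects the precomputed SDS gradient w(t) * (eps_pred - eps) directly, skipping the U-Net Jacobian.

```python
import torch

class SpecifyGradient(torch.autograd.Function):
    """Minimal illustrative sketch of the SDS custom-gradient trick."""

    @staticmethod
    def forward(ctx, latents, sds_grad):
        # Stash the precomputed SDS gradient and return a dummy scalar "loss".
        ctx.save_for_backward(sds_grad)
        return torch.ones(1, device=latents.device, dtype=latents.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        # Inject the SDS gradient directly; no U-Net Jacobian is involved.
        (sds_grad,) = ctx.saved_tensors
        return sds_grad * grad_output, None
```

This is why the printed loss stays at 1.000 throughout training: the optimization signal lives entirely in the backward pass.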

Regarding the visual quality, this is strange, since we consistently obtain better results than the one you showed. Which GPU are you using? Are you using 4 rendered views per iteration? Could you please try different random seeds?
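If it helps when re-running, a small illustrative seeding helper (our naming, not from the repo; check whether paint_it.py already exposes a seed argument) could look like this:

```python
import random

import numpy as np
import torch

def seed_everything(seed: int) -> None:
    # Illustrative helper: fix the Python, NumPy, and PyTorch RNGs so that SDS
    # timestep sampling and other random draws are reproducible across runs.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

seed_everything(42)
```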

Youwang-Kim commented 7 months ago

We have tested on NVIDIA RTX A6000 and A100 machines, and got good results for the pretzel example. Could you please try with other machines?

ghost commented 7 months ago

Hi, would it be feasible to first plug in the Zero123++ image-to-multi-view model to replace the current text-to-image model? What would I need to change?

Youwang-Kim commented 7 months ago

Hi, we freshly downloaded the code and ran it with the default settings, and the result looks good, just as before.

[screenshot: final_top result]

Could you please check that you have installed the correct versions of the libraries? Please verify that your library and CUDA versions match the ones below:

conda create -n paint_it python=3.8
conda activate paint_it

# pytorch installation
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113

# for pytorch3d installation
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
# for python3.8, cuda 11.3, pytorch 1.12 (py38_cu113_pyt1120) -> need to install pytorch3d-0.7.2 
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1120/download.html

pip install git+https://github.com/NVlabs/nvdiffrast/
pip install diffusers==0.12.1 huggingface-hub==0.11.1 transformers==4.21.1 sentence-transformers==2.2.2
pip install PyOpenGL PyOpenGL_accelerate accelerate rich ninja scipy trimesh imageio matplotlib chumpy opencv-python
pip install numpy==1.23.1
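
A quick sanity check (an illustrative snippet, not part of the repo) to confirm the installed versions match the environment above:

```python
import torch
import pytorch3d
import diffusers
import transformers

# Expected, per the environment above: torch 1.12.0+cu113 (CUDA 11.3),
# pytorch3d 0.7.2, diffusers 0.12.1, transformers 4.21.1
print("torch:", torch.__version__, "| CUDA:", torch.version.cuda, "| GPU:", torch.cuda.is_available())
print("pytorch3d:", pytorch3d.__version__)
print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)
```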