Hello,

First of all, thank you for the great work on this project. I am currently working on texture generation as part of my research at HSE University (BaesGroup), and I am trying to compute the benchmark results from the Text2Tex paper.
However, I am running into some difficulties when attempting to render the GLB files required for the benchmark. Specifically, I am struggling with generating the necessary renderings from 20 different views.
Would it be possible for you to provide either:
1) The renderings from the 20 views, or
2) The code or instructions for rendering these images from the original GLB files?
Any help or guidance would be greatly appreciated, as this would significantly aid my research.
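In case it helps to show where I am stuck, this is roughly the kind of multi-view rendering loop I have been attempting. It is only a minimal sketch using trimesh and pyrender; the camera distance, elevation, field of view, resolution, and lighting are my own guesses rather than the settings used in the Text2Tex benchmark, and the file path is a placeholder:

```python
import numpy as np
import trimesh
import pyrender
import imageio

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 camera-to-world pose looking from `eye` toward `target`."""
    forward = eye - target                      # pyrender cameras look down -Z
    forward /= np.linalg.norm(forward)
    right = np.cross(up, forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)
    pose = np.eye(4)
    pose[:3, 0] = right
    pose[:3, 1] = true_up
    pose[:3, 2] = forward
    pose[:3, 3] = eye
    return pose

# Load the textured GLB as a trimesh scene and convert it for pyrender.
tm_scene = trimesh.load("model.glb")            # placeholder path
scene = pyrender.Scene.from_trimesh_scene(tm_scene, ambient_light=np.ones(3) * 0.3)

# Center the camera orbit on the object; distance is a guess from the bounding box.
center = tm_scene.centroid
radius = np.linalg.norm(tm_scene.extents) * 1.5

camera = pyrender.PerspectiveCamera(yfov=np.deg2rad(45.0))
light = pyrender.DirectionalLight(color=np.ones(3), intensity=3.0)
renderer = pyrender.OffscreenRenderer(viewport_width=512, viewport_height=512)

# 20 evenly spaced azimuth angles at a fixed elevation (my guess, not the benchmark's).
for i, azimuth in enumerate(np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)):
    elevation = np.deg2rad(15.0)
    eye = center + radius * np.array([
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
        np.cos(elevation) * np.cos(azimuth),
    ])
    pose = look_at(eye, center)
    cam_node = scene.add(camera, pose=pose)
    light_node = scene.add(light, pose=pose)    # headlight: light follows the camera
    color, _ = renderer.render(scene)
    imageio.imwrite(f"view_{i:02d}.png", color)
    scene.remove_node(cam_node)
    scene.remove_node(light_node)

renderer.delete()
```

My main uncertainty is which camera poses, intrinsics, and image resolution the benchmark expects, so even just pointers on those settings would already help a lot.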
Thank you in advance for your time and assistance!