fudan-zvg / PGC-3D

[ICLR 2024] Enhancing High-Resolution 3D Generation through Pixel-wise Gradient Clipping
Apache License 2.0

Hi, the result is wrong #5

Open zhoushiwei opened 6 months ago

zhoushiwei commented 6 months ago

I just run python main.py -O0 -O2 -O4 --pbr --guidance "controlnet" --text "A panda is dressed in armor, holding a spear in one hand and a shield in the other, realistic" --workspace panda/pbr --gpus "0" --mesh_path "data/panda_spear_shield.obj"

but the result is this

fine_df_ep0003_0001_rgb

mdarhdarz commented 6 months ago

Hi, I just checked this command and it works well on my machine. Please check the following information.

Result: fine_df_ep0003_0001_rgb
Parameters: option
Checkpoint: https://drive.google.com/file/d/1KKmTsEx-AAF_xVVhsxro16Vs-9ERpylM/view?usp=drive_link

The whole environment:

Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
absl-py 2.0.0 pypi_0 pypi accelerate 0.25.0 pypi_0 pypi addict 2.4.0 pypi_0 pypi aiofiles 23.2.1 pypi_0 pypi aiohttp 3.9.1 pypi_0 pypi aiosignal 1.3.1 pypi_0 pypi annotated-types 0.6.0 pypi_0 pypi ansi2html 1.9.1 pypi_0 pypi antlr4-python3-runtime 4.9.3 pypi_0 pypi anyio 3.7.1 pypi_0 pypi asttokens 2.4.1 pypi_0 pypi async-timeout 4.0.3 pypi_0 pypi attrs 23.1.0 pypi_0 pypi blas 1.0 mkl
blessed 1.20.0 pypi_0 pypi blinker 1.7.0 pypi_0 pypi brotli-python 1.0.9 py39h6a678d5_7
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.11.17 hbcca054_0 conda-forge cachetools 5.3.2 pypi_0 pypi carvekit-colab 4.1.0 pypi_0 pypi certifi 2023.11.17 py39h06a4308_0
cffi 1.16.0 py39h5eee18b_0
charset-normalizer 2.0.4 pyhd3eb1b0_0
click 8.1.7 pypi_0 pypi colorama 0.4.6 pyhd8ed1ab_0 conda-forge comm 0.2.0 pypi_0 pypi configargparse 1.7 pypi_0 pypi contourpy 1.2.0 pypi_0 pypi cryptography 41.0.7 py39hdda0065_0
cuda 11.6.1 0 nvidia cuda-cccl 11.6.55 hf6102b2_0 nvidia cuda-command-line-tools 11.6.2 0 nvidia cuda-compiler 11.6.2 0 nvidia cuda-cudart 11.6.55 he381448_0 nvidia cuda-cudart-dev 11.6.55 h42ad0f4_0 nvidia cuda-cuobjdump 11.6.124 h2eeebcb_0 nvidia cuda-cupti 11.6.124 h86345e5_0 nvidia cuda-cuxxfilt 11.6.124 hecbf4f6_0 nvidia cuda-driver-dev 11.6.55 0 nvidia cuda-gdb 12.3.101 0 nvidia cuda-libraries 11.6.1 0 nvidia cuda-libraries-dev 11.6.1 0 nvidia cuda-memcheck 11.8.86 0 nvidia cuda-nsight 12.3.101 0 nvidia cuda-nsight-compute 12.3.1 0 nvidia cuda-nvcc 11.6.124 hbba6d2d_0 nvidia cuda-nvdisasm 12.3.101 0 nvidia cuda-nvml-dev 11.6.55 haa9ef22_0 nvidia cuda-nvprof 12.3.101 0 nvidia cuda-nvprune 11.6.124 he22ec0a_0 nvidia cuda-nvrtc 11.6.124 h020bade_0 nvidia cuda-nvrtc-dev 11.6.124 h249d397_0 nvidia cuda-nvtx 11.6.124 h0630a44_0 nvidia cuda-nvvp 12.3.101 0 nvidia cuda-runtime 11.6.1 0 nvidia cuda-samples 11.6.101 h8efea70_0 nvidia cuda-sanitizer-api 12.3.101 0 nvidia cuda-toolkit 11.6.1 0 nvidia cuda-tools 11.6.1 0 nvidia cuda-visual-tools 11.6.1 0 nvidia cycler 0.12.1 pypi_0 pypi dash 2.14.2 pypi_0 pypi dash-core-components 2.0.0 pypi_0 pypi dash-html-components 2.0.0 pypi_0 pypi dash-table 5.0.0 pypi_0 pypi decorator 5.1.1 pypi_0 pypi diffusers 0.20.0 pypi_0 pypi einops 0.7.0 pypi_0 pypi exceptiongroup 1.2.0 pypi_0 pypi executing 2.0.1 pypi_0 pypi fastapi 0.105.0 pypi_0 pypi fastjsonschema 2.19.0 pypi_0 pypi ffmpeg 4.3 hf484d3e_0 pytorch filelock 3.13.1 pypi_0 pypi flask 3.0.0 pypi_0 pypi fonttools 4.46.0 pypi_0 pypi freetype 2.12.1 h4a9f257_0
freqencoder 0.0.0 pypi_0 pypi frozenlist 1.4.1 pypi_0 pypi fsspec 2023.12.2 pypi_0 pypi fvcore 0.1.5.post20221221 pyhd8ed1ab_0 conda-forge gds-tools 1.8.1.2 0 nvidia giflib 5.2.1 h5eee18b_3
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
google-auth 2.25.2 pypi_0 pypi google-auth-oauthlib 1.2.0 pypi_0 pypi gpustat 1.1.1 pypi_0 pypi gridencoder 0.0.0 pypi_0 pypi grpcio 1.60.0 pypi_0 pypi h11 0.14.0 pypi_0 pypi huggingface-hub 0.19.4 pypi_0 pypi idna 3.4 py39h06a4308_0
imageio 2.33.1 pypi_0 pypi imageio-ffmpeg 0.4.9 pypi_0 pypi importlib-metadata 7.0.0 pypi_0 pypi importlib-resources 6.1.1 pypi_0 pypi intel-openmp 2023.1.0 hdb19cb5_46306
iopath 0.1.9 py39 iopath ipython 8.18.1 pypi_0 pypi ipywidgets 8.1.1 pypi_0 pypi itsdangerous 2.1.2 pypi_0 pypi jedi 0.19.1 pypi_0 pypi jinja2 3.1.2 pypi_0 pypi joblib 1.3.2 pypi_0 pypi jpeg 9e h5eee18b_1
jsonschema 4.20.0 pypi_0 pypi jsonschema-specifications 2023.11.2 pypi_0 pypi jupyter-core 5.5.0 pypi_0 pypi jupyterlab-widgets 3.0.9 pypi_0 pypi kiwisolver 1.4.5 pypi_0 pypi kornia 0.7.0 pypi_0 pypi lame 3.100 h7b6447c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libcublas 11.9.2.110 h5e84587_0 nvidia libcublas-dev 11.9.2.110 h5c901ab_0 nvidia libcufft 10.7.1.112 hf425ae0_0 nvidia libcufft-dev 10.7.1.112 ha5ce4c0_0 nvidia libcufile 1.8.1.2 0 nvidia libcufile-dev 1.8.1.2 0 nvidia libcurand 10.3.4.101 0 nvidia libcurand-dev 10.3.4.101 0 nvidia libcusolver 11.3.4.124 h33c3c4e_0 nvidia libcusparse 11.7.2.124 h7538f96_0 nvidia libcusparse-dev 11.7.2.124 hbbe9722_0 nvidia libdeflate 1.17 h5eee18b_1
libffi 3.4.4 h6a678d5_0
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libiconv 1.16 h7f8727e_2
libidn2 2.3.4 h5eee18b_0
libnpp 11.6.3.124 hd2722f0_0 nvidia libnpp-dev 11.6.3.124 h3c42840_0 nvidia libnvjpeg 11.6.2.124 hd473ad6_0 nvidia libnvjpeg-dev 11.6.2.124 hb5906b9_0 nvidia libpng 1.6.39 h5eee18b_0
libstdcxx-ng 11.2.0 h1234567_1
libtasn1 4.19.0 h5eee18b_0
libtiff 4.5.1 h6a678d5_0
libunistring 0.9.10 h27cfd23_0
libwebp 1.3.2 h11a3e52_0
libwebp-base 1.3.2 h5eee18b_0
lightning-utilities 0.10.0 pypi_0 pypi loguru 0.7.2 pypi_0 pypi lz4-c 1.9.4 h6a678d5_0
markdown 3.5.1 pypi_0 pypi markdown-it-py 3.0.0 pypi_0 pypi markupsafe 2.1.3 pypi_0 pypi matplotlib 3.8.2 pypi_0 pypi matplotlib-inline 0.1.6 pypi_0 pypi mdurl 0.1.2 pypi_0 pypi mkl 2023.1.0 h213fc3f_46344
mkl-service 2.4.0 py39h5eee18b_1
mkl_fft 1.3.8 py39h5eee18b_0
mkl_random 1.2.4 py39hdb19cb5_0
multidict 6.0.4 pypi_0 pypi nbformat 5.7.0 pypi_0 pypi ncurses 6.4 h6a678d5_0
nest-asyncio 1.5.8 pypi_0 pypi nettle 3.7.3 hbbd107a_1
ninja 1.11.1.1 pypi_0 pypi nsight-compute 2023.3.1.1 0 nvidia numpy 1.26.2 py39h5f9d8c6_0
numpy-base 1.26.2 py39hb5e798b_0
nvdiffrast 0.3.1 pypi_0 pypi nvidia-ml-py 12.535.133 pypi_0 pypi oauthlib 3.2.2 pypi_0 pypi omegaconf 2.3.0 pypi_0 pypi open3d 0.17.0 pypi_0 pypi opencv-python 4.8.1.78 pypi_0 pypi openh264 2.1.1 h4ff587b_0
openjpeg 2.4.0 h3ad879b_0
openssl 3.0.12 h7f8727e_0
packaging 23.2 pypi_0 pypi pandas 2.1.4 pypi_0 pypi parso 0.8.3 pypi_0 pypi pexpect 4.9.0 pypi_0 pypi pillow 10.0.1 py39ha6cbd5a_0
pip 23.3.1 py39h06a4308_0
platformdirs 4.1.0 pypi_0 pypi plotly 5.18.0 pypi_0 pypi portalocker 2.8.2 py39hf3d152e_1 conda-forge prompt-toolkit 3.0.43 pypi_0 pypi protobuf 4.23.4 pypi_0 pypi psutil 5.9.7 pypi_0 pypi ptyprocess 0.7.0 pypi_0 pypi pure-eval 0.2.2 pypi_0 pypi pyasn1 0.5.1 pypi_0 pypi pyasn1-modules 0.3.0 pypi_0 pypi pycparser 2.21 pyhd3eb1b0_0
pydantic 2.5.2 pypi_0 pypi pydantic-core 2.14.5 pypi_0 pypi pygments 2.17.2 pypi_0 pypi pymcubes 0.1.4 pypi_0 pypi pymeshlab 2023.12 pypi_0 pypi pyopenssl 23.2.0 py39h06a4308_0
pyparsing 3.1.1 pypi_0 pypi pyquaternion 0.9.9 pypi_0 pypi pysocks 1.7.1 py39h06a4308_0
python 3.9.18 h955ad1f_0
python-dateutil 2.8.2 pypi_0 pypi python-multipart 0.0.6 pypi_0 pypi python_abi 3.9 2_cp39 conda-forge pytorch 1.13.0 py3.9_cuda11.6_cudnn8.3.2_0 pytorch pytorch-cuda 11.6 h867d48c_1 pytorch pytorch-lightning 2.1.2 pypi_0 pypi pytorch-mutex 1.0 cuda pytorch pytorch3d 0.7.5 py39_cu116_pyt1130 pytorch3d pytz 2023.3.post1 pypi_0 pypi pyyaml 6.0 py39hb9d737c_4 conda-forge raymarching 0.0.0 pypi_0 pypi readline 8.2 h5eee18b_0
referencing 0.32.0 pypi_0 pypi regex 2023.10.3 pypi_0 pypi requests 2.31.0 py39h06a4308_0
requests-oauthlib 1.3.1 pypi_0 pypi retrying 1.3.4 pypi_0 pypi rich 13.7.0 pypi_0 pypi rpds-py 0.15.2 pypi_0 pypi rsa 4.9 pypi_0 pypi safetensors 0.4.1 pypi_0 pypi scikit-learn 1.3.2 pypi_0 pypi scipy 1.11.4 pypi_0 pypi setuptools 68.2.2 py39h06a4308_0
shencoder 0.0.0 pypi_0 pypi six 1.16.0 pypi_0 pypi sniffio 1.3.0 pypi_0 pypi sqlite 3.41.2 h5eee18b_0
stack-data 0.6.3 pypi_0 pypi starlette 0.27.0 pypi_0 pypi tabulate 0.9.0 pyhd8ed1ab_1 conda-forge taming-transformers-rom1504 0.0.6 pypi_0 pypi tbb 2021.8.0 hdb19cb5_0
tenacity 8.2.3 pypi_0 pypi tensorboard 2.15.1 pypi_0 pypi tensorboard-data-server 0.7.2 pypi_0 pypi tensorboardx 2.6.2.2 pypi_0 pypi termcolor 2.3.0 pyhd8ed1ab_0 conda-forge threadpoolctl 3.2.0 pypi_0 pypi tk 8.6.12 h1ccaba5_0
tokenizers 0.15.0 pypi_0 pypi torch-ema 0.3 pypi_0 pypi torchmetrics 1.2.1 pypi_0 pypi torchvision 0.14.0 py39_cu116 pytorch tqdm 4.66.1 pyhd8ed1ab_0 conda-forge traitlets 5.14.0 pypi_0 pypi transformers 4.36.1 pypi_0 pypi trimesh 4.0.5 pypi_0 pypi triton 2.1.0 pypi_0 pypi typing-extensions 4.9.0 pypi_0 pypi tzdata 2023.3 pypi_0 pypi urllib3 1.26.18 py39h06a4308_0
uvicorn 0.24.0.post1 pypi_0 pypi wcwidth 0.2.12 pypi_0 pypi werkzeug 3.0.1 pypi_0 pypi wheel 0.41.2 py39h06a4308_0
widgetsnbextension 4.0.9 pypi_0 pypi xatlas 0.0.8 pypi_0 pypi xformers 0.0.24+042abc8.d20231218 pypi_0 pypi xz 5.4.5 h5eee18b_0
yacs 0.1.8 pyhd8ed1ab_0 conda-forge yaml 0.2.5 h7f98852_2 conda-forge yarl 1.9.4 pypi_0 pypi zipp 3.17.0 pypi_0 pypi zlib 1.2.13 h5eee18b_0
zstd 1.5.5 hc292b87_0

zhoushiwei commented 6 months ago

(Screenshot 2024-05-10 17:45:35) It looks like the configuration is the same as yours, and I didn't change anything in my code. The only difference is that my SD models are downloaded locally and then loaded, because of network problems. I don't know what's wrong.
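
For context, this is roughly how I load the local weights (a hedged sketch, not the exact code; the directory path is a placeholder, and the pipeline class may differ from the one this repo actually instantiates):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder local directory holding pre-downloaded SDXL weights
# (e.g. fetched beforehand with git-lfs or huggingface-cli).
LOCAL_SDXL_DIR = "/data/models/stable-diffusion-xl-base-1.0"

# Loading from a local path avoids any network access at run time,
# which helps when the Hugging Face Hub is unreachable.
pipe = StableDiffusionXLPipeline.from_pretrained(
    LOCAL_SDXL_DIR,
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")
```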

mdarhdarz commented 6 months ago

OK, I know what happened. There may be a bug with SDXL-1.0. I only tested SDXL-0.9, because SDXL-1.0 was not yet available around August 2023. Additionally, you can use "export HF_ENDPOINT=https://hf-mirror.com" to download models from Hugging Face without a VPN.

mdarhdarz commented 6 months ago

While your result looks like a plain blue mask, my result is not textureless, so I am not sure that changing the model will solve your problem.

mdarhdarz commented 6 months ago

SDXL-1.0 needs at least 7 epochs (2800 steps) to get a reasonable texture: panda

zhoushiwei commented 6 months ago

Thanks for the reply, I'll try it out later. One question: why does the color tone of these results (including the results in the paper) look a bit strange to you? And you mean it is OK to use SDXL-1.0, I just need to raise the number of epochs to more than 7, right?

zhoushiwei commented 6 months ago

Hi, I'm trying to generate a texture with the guidance of dodging multiple images all the time. I see that the provided panda sample generates quite good results, but when I use my own images and mesh the results are very poor. Is there anything to pay attention to here?

mdarhdarz commented 6 months ago

I'm not quite sure what 'guidance of dodging multiple images' refers to. In my experiments, the combination of a reference image and SDXL SDS did not work well, due to conflicting gradients around the boundary of the reference image. SD1.5 with a reference image is just an implemented function that I did not study much. For failures of SDXL SDS alone, the prompt and the background have a large influence. For the text prompt, the success rate of the prompts I used is about 60-70% (at the same quality as the paper and supplementary). For the background image, note that the one currently used is 'data/background.png'. I found that a solid white background hurts the generation quality, so I have tried many images. I think a suitable background will help a lot.
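
If you want to try a different background, something like the following is enough to drop a custom image in as 'data/background.png' (a rough sketch; the source file name and the 512x512 resolution are placeholders, not values taken from the repo):

```python
from PIL import Image

# Placeholder source image: any non-plain-white, low-clutter photo or gradient.
bg = Image.open("my_background.jpg").convert("RGB")

# Resize to the resolution used during texture optimization (512 is an assumption)
# and overwrite the background image the code reads.
bg = bg.resize((512, 512), Image.LANCZOS)
bg.save("data/background.png")
```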

zhoushiwei commented 6 months ago

Well, I am using SD1.5 with a reference image now. I mean, the results from your panda sample are OK, but my results with my own samples are very poor; is there a trick to it? There is another problem: it seems that multi-image input is not supported. I changed your code to support multiple images at the same time, but it seems to be buggy.

zhoushiwei commented 6 months ago

Regarding the issue with background images, I think it would be better to first separate the reference subject from its background and then composite it onto one uniform background image for all views. Would the results be better this way?
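
Something like this is what I have in mind, assuming the reference has already been cut out to RGBA by a matting tool (a sketch only; the file names are placeholders):

```python
from PIL import Image

# RGBA cutout of the reference subject (background already removed by a matting tool).
fg = Image.open("reference_cutout.png").convert("RGBA")

# One uniform background image reused for every reference view.
bg = Image.open("data/background.png").convert("RGBA").resize(fg.size)

# Alpha-composite the subject over the shared background.
composited = Image.alpha_composite(bg, fg).convert("RGB")
composited.save("reference_uniform_bg.png")
```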

zhoushiwei commented 6 months ago

Attachments: toy_terrier.obj.txt, front, fine_df_ep0010_0001_rgb. It seems the result is bad.

mdarhdarz commented 6 months ago

Maybe there is a misalignment between the reference image and the reference camera (the index-0 data? I forget; check nerf/provider.py), which may require some manual effort. The strange color can be attributed to many factors: model, prompt, even focal range. In fact, I still don't know the effect of the shuffle control...

mdarhdarz commented 6 months ago

Since there is an L2 loss between the reference image and the render from the reference camera, any misalignment introduces errors. I have also seen strange colors many times. If you don't mind minor perturbations to the mesh, you can allow the mesh to change with a very small learning rate, as shown in the middle of Figure 4 in the paper.
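
Roughly the idea, not the exact code in this repo (all names and values below are illustrative):

```python
import torch
import torch.nn.functional as F

# Illustrative stand-ins: a texture being optimized and small per-vertex offsets.
num_vertices = 10000
texture = torch.rand(1, 3, 1024, 1024, requires_grad=True)         # appearance parameters
vertex_offsets = torch.zeros(num_vertices, 3, requires_grad=True)  # tiny mesh deformation

# A much smaller learning rate on the mesh, so it only absorbs minor misalignment.
optimizer = torch.optim.Adam([
    {"params": [texture], "lr": 1e-2},
    {"params": [vertex_offsets], "lr": 1e-5},
])

def reference_loss(render_rgb, ref_rgb, mask):
    # L2 (MSE) loss between the render from the reference camera and the
    # reference image, restricted to the foreground mask to reduce the
    # conflicting gradients around the silhouette boundary.
    return F.mse_loss(render_rgb * mask, ref_rgb * mask)
```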

zhoushiwei commented 5 months ago

Thanks for the reply. Aligning the reference image and the reference camera exactly is basically impossible in a real-world task. What I am mainly trying to do is use an image as guidance so that the generated texture maps are related to that image. I don't know if you have any better strategies for improvement.

mdarhdarz commented 5 months ago

I once considered this, but it was put aside due to other projects. Using a stylization ControlNet would be my first try. I just checked Meshy and found that they also do not support this function. Good luck.

zhoushiwei commented 5 months ago

Have you read 'TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion'? Could the methods from that paper be incorporated into your project?

zhoushiwei commented 5 months ago

Yes, currently there are basically no similar products on the market. I tried aligning the reference images exactly, and it worked well for transferring the textures. Do you think this approach could be applied to 3D facial reconstruction?