xxlong0 / Wonder3D

Single Image to 3D using Cross-Domain Diffusion for 3D Generation
https://www.xxlong.site/Wonder3D/
GNU Affero General Public License v3.0

Must I run this repo with xformers? #59

Open · Taxalfer opened this issue 10 months ago

Taxalfer commented 10 months ago

My CUDA version is 11.3, while xformers==0.0.16 requires a newer CUDA version, so I tried running the code with 'enable_xformers_memory_efficient_attention: false'. But it throws the error below:

Traceback (most recent call last):
  File "test_mvdiffusion_seq.py", line 335, in <module>
    main(cfg)
  File "test_mvdiffusion_seq.py", line 291, in main
    log_validation_joint(
  File "test_mvdiffusion_seq.py", line 205, in log_validation_joint
    out = pipeline(
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data/dengyao/projects/Wonder3D/mvdiffusion/pipelines/pipeline_mvdiffusion_image.py", line 448, in __call__
    noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=image_embeddings, class_labels=camera_embeddings).sample
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/unet_mv2d_condition.py", line 966, in forward
    sample, res_samples = downsample_block(
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/unet_mv2d_blocks.py", line 858, in forward
    hidden_states = attn(
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/transformer_mv2d.py", line 314, in forward
    hidden_states = block(
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/transformer_mv2d.py", line 544, in forward
    attn_output = self.attn1(
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/diffusers/models/attention_processor.py", line 322, in forward
    return self.processor(
TypeError: __call__() got an unexpected keyword argument 'cross_domain_attention'

flamehaze1115 commented 10 months ago

Yes, xformers is required. If you are on a different CUDA version, you may consider installing an xformers build that matches it.
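
For anyone on a different CUDA toolkit, a quick sanity check before running the pipeline is to confirm that the installed torch, CUDA, and xformers builds actually line up. This is only a minimal sketch using standard torch/xformers attributes, not something shipped with the repo:

```python
# Minimal sanity check (not part of the Wonder3D repo): confirm that the
# installed torch, CUDA, and xformers builds line up before running inference.
import torch

print("torch:", torch.__version__)                  # e.g. 1.12.1+cu113
print("CUDA (torch was built for):", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

try:
    import xformers
    import xformers.ops

    print("xformers:", xformers.__version__)
    # The pipeline's attention processors rely on this op.
    print("memory_efficient_attention available:",
          hasattr(xformers.ops, "memory_efficient_attention"))
except ImportError:
    print("xformers is not installed; install a build compiled against "
          "the same CUDA/torch versions shown above.")
```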

Taxalfer commented 10 months ago

Thank you for your reply! I tried xformers==0.0.13, but one of its functions has a different signature than in version 0.0.16, so it doesn't work. The error details are as follows:

Traceback (most recent call last):
  File "test_mvdiffusion_seq.py", line 335, in <module>
    main(cfg)
  File "test_mvdiffusion_seq.py", line 291, in main
    log_validation_joint(
  File "test_mvdiffusion_seq.py", line 205, in log_validation_joint
    out = pipeline(
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data/dengyao/projects/Wonder3D/mvdiffusion/pipelines/pipeline_mvdiffusion_image.py", line 448, in __call__
    noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=image_embeddings, class_labels=camera_embeddings).sample
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/unet_mv2d_condition.py", line 966, in forward
    sample, res_samples = downsample_block(
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/unet_mv2d_blocks.py", line 858, in forward
    hidden_states = attn(
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/transformer_mv2d.py", line 314, in forward
    hidden_states = block(
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/transformer_mv2d.py", line 572, in forward
    attn_output = self.attn2(
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/diffusers/models/attention_processor.py", line 322, in forward
    return self.processor(
  File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/diffusers/models/attention_processor.py", line 1034, in __call__
    hidden_states = xformers.ops.memory_efficient_attention(
TypeError: memory_efficient_attention() got an unexpected keyword argument 'scale'
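
For context: the 'scale' keyword is something newer xformers builds accept but 0.0.13 does not, which is exactly what this TypeError reports. A minimal sketch (assuming xformers is importable; not part of the repo) to check whether a given install accepts it:

```python
# Minimal sketch: check whether this xformers build accepts the 'scale'
# keyword that diffusers passes into memory_efficient_attention.
import inspect

import xformers.ops

sig = inspect.signature(xformers.ops.memory_efficient_attention)
print("memory_efficient_attention signature:", sig)

if "scale" in sig.parameters:
    print("'scale' is supported; the diffusers call should go through.")
else:
    print("'scale' is not supported by this build; use the xformers version "
          "pinned in requirements.txt (or a newer build matching your CUDA).")
```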

So maybe my CUDA environment just won't be able to run your project successfully.

Taxalfer commented 10 months ago

I tried running the xformers version you pinned in requirements.txt directly on CUDA 11.3 and it worked. Please forgive my carelessness.

StarShang commented 7 months ago

> I tried running the xformers version you pinned in requirements.txt directly on CUDA 11.3 and it worked. Please forgive my carelessness.

Which version of xformers did you use in the end?