Open Taxalfer opened 10 months ago
Yes, xformers is required. If you use a different CUDA version, you may consider installing an xformers version that matches it.
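For reference, a minimal sketch of checking which CUDA toolkit the installed PyTorch build targets and then installing a matching xformers wheel (the 0.0.16 pin below is the one discussed in this thread; adjust it to whatever the xformers release notes list for your CUDA build):

```shell
# Print the CUDA version the installed torch build was compiled against
python -c "import torch; print(torch.version.cuda)"

# Pin xformers to the version from requirements.txt
# (0.0.16 here is taken from this thread; it needs a matching CUDA build)
pip install xformers==0.0.16
```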
Thank you for your reply! I tried xformers==0.0.13, but one function has a different prototype than in version 0.0.16, and it doesn't work. The error details are as follows:
Traceback (most recent call last):
File "test_mvdiffusion_seq.py", line 335, in <module>
So maybe my CUDA environment just won't be able to run your project successfully.
I ran it directly on CUDA 11.3 with the xformers version you gave in requirements.txt and it worked; please forgive my carelessness.
Which version of xformers did you use in the end?
My CUDA version is 11.3, while xformers==0.0.16 needs a higher CUDA version, so I tried to run the code with 'enable_xformers_memory_efficient_attention: false'. But it throws the error below:

Traceback (most recent call last):
File "test_mvdiffusion_seq.py", line 335, in <module>
main(cfg)
File "test_mvdiffusion_seq.py", line 291, in main
log_validation_joint(
File "test_mvdiffusion_seq.py", line 205, in log_validation_joint
out = pipeline(
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/data/dengyao/projects/Wonder3D/mvdiffusion/pipelines/pipeline_mvdiffusion_image.py", line 448, in __call__
noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=image_embeddings, class_labels=camera_embeddings).sample
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/unet_mv2d_condition.py", line 966, in forward
sample, res_samples = downsample_block(
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/unet_mv2d_blocks.py", line 858, in forward
hidden_states = attn(
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/transformer_mv2d.py", line 314, in forward
hidden_states = block(
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/transformer_mv2d.py", line 544, in forward
attn_output = self.attn1(
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/diffusers/models/attention_processor.py", line 322, in forward
return self.processor(
TypeError: __call__() got an unexpected keyword argument 'cross_domain_attention'
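A minimal sketch of why this TypeError occurs (the classes below are hypothetical stand-ins, not the actual Wonder3D or diffusers code): the transformer blocks pass an extra keyword, cross_domain_attention, that only the project's custom xformers-based attention processor accepts. With enable_xformers_memory_efficient_attention set to false, a stock processor whose __call__ lacks that parameter is used instead, so Python rejects the call:

```python
class CustomXFormersProcessor:
    # Stand-in for a processor that knows the extra keyword.
    def __call__(self, hidden_states, cross_domain_attention=False):
        return hidden_states  # real code would run memory-efficient attention


class StockProcessor:
    # Stand-in for a default processor without that keyword.
    def __call__(self, hidden_states):
        return hidden_states


sample = [1.0, 2.0]

# The custom processor accepts the keyword without complaint.
CustomXFormersProcessor()(sample, cross_domain_attention=True)

# The stock processor raises the same kind of TypeError seen above.
try:
    StockProcessor()(sample, cross_domain_attention=True)
except TypeError as err:
    print(err)  # mentions "unexpected keyword argument 'cross_domain_attention'"
```

This suggests the config flag alone is not enough: the model's forward path would also have to stop passing cross_domain_attention when xformers is disabled.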