YangLing0818 / RealCompo

RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models
https://arxiv.org/abs/2402.12908
101 stars · 3 forks

When I run the example, I get an error. #2

Open · Maybeetw opened this issue 4 months ago

Maybeetw commented 4 months ago

```
Traceback (most recent call last):
  File "inference.py", line 334, in <module>
    run(meta, args, starting_noise)
  File "inference.py", line 275, in run
    samples_fake = sampler.sample(S=steps, shape=shape, input=input, uc=uc, guidance_scale=config.guidance_scale, mask=inpainting_mask, x0=z0)
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/autodl-tmp/RealCompo/ldm/models/diffusion/plms.py", line 128, in sample
    return self.plms_sampling(shape, input, uc, guidance_scale, mask=mask, x0=x0)
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/autodl-tmp/RealCompo/ldm/models/diffusion/plms.py", line 166, in plms_sampling
    attn_layout, attn_text = self.get_attention_maps(ts, img, input)
  File "/root/autodl-tmp/RealCompo/ldm/models/diffusion/plms.py", line 78, in get_attention_maps
    e_t_text = self.text_unet(input2["x"], input2["timesteps"], input2["context"]).sample
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/diffusers/models/unet_2d_condition.py", line 970, in forward
    sample = upsample_block(
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/diffusers/models/unet_2d_blocks.py", line 2134, in forward
    hidden_states = attn(
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/diffusers/models/transformer_2d.py", line 292, in forward
    hidden_states = block(
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/diffusers/models/attention.py", line 171, in forward
    attn_output = self.attn2(
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/autodl-tmp/RealCompo/utils/attentionmap.py", line 211, in forward
    attention_probs = controller(attention_probs, is_cross, place_in_unet)
  File "/root/autodl-tmp/RealCompo/utils/attentionmap.py", line 53, in __call__
    self.between_steps()
  File "/root/autodl-tmp/RealCompo/utils/attentionmap.py", line 85, in between_steps
    self.attention_store[key][i] += self.step_store[key][i]
RuntimeError: A view was created in no_grad mode and is being modified inplace with grad mode enabled. Given that this use case is ambiguous and error-prone, it is forbidden. You can clarify your code by moving both the view and the inplace either both inside the no_grad block (if you don't want the inplace to be tracked) or both outside (if you want the inplace to be tracked).
```
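For context: the RuntimeError describes a general PyTorch autograd restriction, not something specific to RealCompo. It can be reproduced in a few lines (the tensors below are hypothetical stand-ins, not the repo's attention stores), along with the fix the error message itself suggests:

```python
import torch

# Hypothetical minimal reproduction of the autograd restriction named in
# the RuntimeError above; these tensors are not RealCompo's attention stores.
base = torch.zeros(4, requires_grad=True).clone()  # non-leaf tensor in the graph

with torch.no_grad():
    view = base.view(-1)        # view created while grad tracking is OFF

try:
    view += torch.ones(4)       # in-place update with grad tracking back ON
except RuntimeError as err:
    print(type(err).__name__)   # the "view was created in no_grad mode" error

# The fix the error message suggests: keep the view and the in-place update
# on the same side of no_grad -- here, both inside it.
with torch.no_grad():
    view += torch.ones(4)       # no error; the update is not tracked
```

In the traceback, the `+=` at `attentionmap.py` line 85 plays the role of `view += ...` here, so wrapping the accumulation in `torch.no_grad()` (or replacing `+=` with an out-of-place `+`) should avoid the error.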

YangLing0818 commented 4 months ago

Thank you for your attention to our project. We have re-examined our code, and it appears to work correctly without the problem you mentioned. For this issue, please make sure your environment is configured according to the provided "Installation" instructions.
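To help rule out a version mismatch, here is a small stdlib-only snippet that prints the installed versions of the packages appearing in the traceback paths (package names inferred from the traceback) so they can be compared against the repo's pinned requirements:

```python
# Print installed versions of the packages that appear in the traceback
# paths, to compare against the repo's pinned requirements.
import importlib.metadata as md

for pkg in ("torch", "diffusers"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "is not installed")
```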

AdventureStory commented 4 months ago

I'm running into the same problem.

Cominclip commented 4 months ago

> I'm running into the same problem.

Thank you for your question. We have updated the code, and the problem should now be fixed.
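For reference, one way to remove this class of error in a `between_steps`-style accumulator is to rebind each list slot with an out-of-place add instead of `+=`. This is a sketch under the assumption that the stores are dicts of lists of tensors, as the traceback suggests; it is not necessarily the fix that was actually committed:

```python
import torch

def between_steps(attention_store, step_store):
    # Rebinding the list slot with an out-of-place add avoids the in-place
    # modification of a tensor that may be a view created under no_grad.
    for key in step_store:
        for i in range(len(step_store[key])):
            attention_store[key][i] = attention_store[key][i] + step_store[key][i]
    return attention_store

# Toy usage with hypothetical stores:
store = {"up_cross": [torch.zeros(2, 2)]}
step = {"up_cross": [torch.ones(2, 2)]}
between_steps(store, step)
```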

AdventureStory commented 4 months ago

Thank you for your quick reply~

Maybeetw commented 4 months ago

Thank you for your quick answer, it works perfectly!