Jimmyshi7 opened this issue 1 year ago
Try again; I have fixed the bug.
You can also use app.py to run several apps at the same time.
OK! I have started up the demo successfully. Thank you very much.
editany.py still hits the error.
I ran "sam2edit.py". When I upload an image to run, an such error occured.
File "/project/EditAnything/sam2edit_lora.py", line 615, in process x_samples_tile = self.tile_pipe( File "/project/EditAnything/utils/stable_diffusion_controlnet_inpaint.py", line 1571, in call down_block_res_samples, mid_block_res_sample = self.controlnet( File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, kwargs) File "/root/miniconda3/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, *kwargs) File "/root/miniconda3/lib/python3.10/site-packages/diffusers/models/controlnet.py", line 526, in forward sample, res_samples = downsample_block( File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(input, kwargs) File "/root/miniconda3/lib/python3.10/site-packages/diffusers/models/unet_2d_blocks.py", line 867, in forward hidden_states = attn( File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, kwargs) File "/root/miniconda3/lib/python3.10/site-packages/diffusers/models/transformer_2d.py", line 265, in forward hidden_states = block( File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, *kwargs) File "/root/miniconda3/lib/python3.10/site-packages/diffusers/models/attention.py", line 331, in forward attn_output = self.attn2( File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(input, kwargs) File "/root/miniconda3/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 267, in forward return self.processor( File "/root/miniconda3/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 689, in call key = attn.to_k(encoder_hidden_states) File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: mat1 and mat2 shapes cannot be multiplied (64x1024 and 768x320)