justinpinkney / stable-diffusion

MIT License
1.45k stars 267 forks

AttributeError: 'FrozenCLIPImageEmbedder' object has no attribute 'transformer' #13

Closed deepxuexi closed 2 years ago

deepxuexi commented 2 years ago

I have a problem. How can I solve it? The error message is as follows:

```
D:\SDcondition\stable-diffusion>python scripts/image_variations.py
Loading model from models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Keeping EMAs of 688.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Traceback (most recent call last):
  File "scripts/image_variations.py", line 125, in <module>
    fire.Fire(main)
  File "C:\Python38\lib\site-packages\fire\core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "C:\Python38\lib\site-packages\fire\core.py", line 466, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "C:\Python38\lib\site-packages\fire\core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "scripts/image_variations.py", line 103, in main
    model = load_model_from_config(config, ckpt, device=device)
  File "scripts/image_variations.py", line 29, in load_model_from_config
    model = instantiate_from_config(config.model)
  File "d:\sdcondition\stable-diffusion\ldm\util.py", line 83, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "d:\sdcondition\stable-diffusion\ldm\models\diffusion\ddpm.py", line 523, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "d:\sdcondition\stable-diffusion\ldm\models\diffusion\ddpm.py", line 581, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "d:\sdcondition\stable-diffusion\ldm\util.py", line 83, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "d:\sdcondition\stable-diffusion\ldm\modules\encoders\modules.py", line 209, in __init__
    self.freeze()
  File "d:\sdcondition\stable-diffusion\ldm\modules\encoders\modules.py", line 215, in freeze
    self.transformer = self.transformer.eval()
  File "C:\Python38\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'FrozenCLIPImageEmbedder' object has no attribute 'transformer'
```
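For context, a minimal sketch of what the traceback suggests is going on (the attribute names here are assumptions for illustration, not the repo's exact code): `freeze()` references `self.transformer`, an attribute the *text* embedder defines, while the image embedder stores its backbone under a different name, so `nn.Module.__getattr__` raises `AttributeError`. Pointing `freeze()` at the attribute the class actually sets avoids the error:

```python
import torch.nn as nn


class FrozenImageEmbedderSketch(nn.Module):
    """Hypothetical minimal stand-in for FrozenCLIPImageEmbedder."""

    def __init__(self):
        super().__init__()
        # the image embedder stores its backbone here; `nn.Linear` is a
        # stand-in for the real CLIP vision model
        self.model = nn.Linear(16, 16)
        self.freeze()

    def freeze(self):
        # reference the attribute this class actually defines,
        # not `self.transformer` (which only the text embedder sets)
        self.model = self.model.eval()
        for param in self.model.parameters():
            param.requires_grad = False


embedder = FrozenImageEmbedderSketch()  # constructs and freezes without AttributeError
```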

osfa commented 2 years ago

Same issue. I'm guessing this has something to do with conda environments and naming, but I'm not sure (it's somehow trying to import the wrong transformer lib, perhaps). Is the environment.yaml stale for this repo?

osfa commented 2 years ago

no that wasn't it. this seems relevant though:

```python
# I didn't call this originally, but seems like it was frozen anyway
self.freeze()
```

The `self.freeze()` in modules.py seems to throw this? If I comment out `self.freeze()` I get past this error and into an OOM error instead. I'm on 24 GB of VRAM; should that not be enough?

osfa commented 2 years ago

And the OOM-ing was due to `n_samples=4`, of course. It runs! So `self.freeze()` seems to be the culprit here.

mhnoni commented 2 years ago

Same issue here. I think it's due to a PyTorch version conflict with fire; if someone could fix it, let us know.

gald89 commented 2 years ago

I've set up two different Docker images (one based on rapidsai-core and the other based on pytorch), and both throw the same `AttributeError`.

petarov commented 2 years ago

I've commented out lines 212 and 158 (`self.transformer = self.transformer.eval()`) in ldm/modules/encoders/modules.py, and it seems to run at http://127.0.0.1:7860/.
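Rather than deleting the lines outright, a less invasive workaround would be to guard the call so it only runs when the attribute exists. This is a hypothetical sketch (the class and attribute names are stand-ins, not the repo's code):

```python
import torch.nn as nn


class GuardedEmbedder(nn.Module):
    """Hypothetical embedder; `backbone` is a stand-in for the CLIP model."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)  # stand-in for the real backbone

    def freeze(self):
        # only touch `transformer` if this subclass actually defines it;
        # nn.Module raises AttributeError for unknown attributes, so we guard
        if hasattr(self, "transformer"):
            self.transformer = self.transformer.eval()
            for p in self.transformer.parameters():
                p.requires_grad = False


e = GuardedEmbedder()
e.freeze()  # no AttributeError: the guard simply skips the missing attribute
```

Note that with this guard the backbone is left unfrozen, which matches the effect of commenting the lines out.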

mhnoni commented 2 years ago

> I've commented out lines 212 and 158 (`self.transformer = self.transformer.eval()`) in ldm/modules/encoders/modules.py, and it seems to run at http://127.0.0.1:7860/.

Any idea what kind of effect this will have? Or what is `self.transformer` anyway?

petarov commented 2 years ago

> I've commented out lines 212 and 158 (`self.transformer = self.transformer.eval()`) in ldm/modules/encoders/modules.py, and it seems to run at http://127.0.0.1:7860/.
>
> Any idea what kind of effect this will have? Or what is `self.transformer` anyway?

Probably an OOM, as @osfa stated above (I read his comment after posting mine).

In any case, I was unable to run this on my Mac because it lacks NVIDIA hardware, so I opted for a lambdalabs instance, and I can confirm that it works. I tested about 10 inputs, and GPU memory usage did not go beyond 21983 MiB.

mhnoni commented 2 years ago

> I've commented out lines 212 and 158 (`self.transformer = self.transformer.eval()`) in ldm/modules/encoders/modules.py, and it seems to run at http://127.0.0.1:7860/.
>
> Any idea what kind of effect this will have? Or what is `self.transformer` anyway?
>
> Probably an OOM, as @osfa stated above (I read his comment after posting mine).
>
> In any case, I was unable to run this on my Mac because it lacks NVIDIA hardware, so I opted for a lambdalabs instance, and I can confirm that it works. I tested about 10 inputs, and GPU memory usage did not go beyond 21983 MiB.

Oh, that makes sense. Thanks for the explanation.

justinpinkney commented 2 years ago

> no that wasn't it. this seems relevant though:
>
> ```python
> # I didn't call this originally, but seems like it was frozen anyway
> self.freeze()
> ```
>
> The `self.freeze()` in modules.py seems to throw this? If I comment out `self.freeze()` I get past this error and into an OOM error instead. I'm on 24 GB of VRAM; should that not be enough?

This is fixed now.