CrossPr0duct closed this issue 7 months ago
Can you provide a fully reproducible code snippet please?
Yeah, let me get a stripped-down version of this.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Does this still happen with peft installed? peft should be able to keep the memory requirements at bay as well. If not, feel free to re-open.
Describe the bug
I tried self.pipeline.load_lora_weights(lora_path, low_cpu_mem_usage=False, ignore_mismatched_sizes=True), but it doesn't seem to take effect.
RuntimeError: Error(s) in loading state_dict for LoRALinearLayer:
    size mismatch for down.weight: copying a param with shape torch.Size([128, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 320]).
    size mismatch for up.weight: copying a param with shape torch.Size([320, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 128]).
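The error suggests the checkpoint stores the LoRA matrices in conv-style 4D shape (with trailing 1x1 spatial dims) while the model's LoRALinearLayer expects 2D weights. A possible workaround sketch, not verified against diffusers internals: squeeze the trailing singleton dims before loading. The helper name squeeze_conv_lora is hypothetical, and passing a pre-loaded state dict to load_lora_weights is an assumption about the diffusers API.

```python
import torch

def squeeze_conv_lora(state_dict):
    """Squeeze trailing 1x1 spatial dims from conv-style LoRA weights,
    e.g. [128, 320, 1, 1] -> [128, 320], so they fit linear layers."""
    converted = {}
    for name, tensor in state_dict.items():
        if tensor.ndim == 4 and tensor.shape[2:] == (1, 1):
            # Drop the two trailing spatial dimensions of size 1.
            converted[name] = tensor.squeeze(3).squeeze(2)
        else:
            converted[name] = tensor
    return converted

# Hypothetical usage (assumes the LoRA file is in safetensors format):
# from safetensors.torch import load_file
# converted = squeeze_conv_lora(load_file(lora_path))
# pipeline.load_lora_weights(converted)
```

This only reconciles the shapes; whether the resulting weights behave identically to the original conv-style LoRA depends on how the checkpoint was trained.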
Reproduction
Load Any LoRA
Logs
System Info
accelerate==0.26.1
antlr4-python3-runtime==4.9.3
certifi==2022.12.7
charset-normalizer==2.1.1
controlnet-aux==0.0.3
diffusers==0.23.1
einops==0.7.0
filelock==3.9.0
fsspec==2023.10.0
huggingface-hub==0.18.0
icefall==1.0
idna==3.4
imageio==2.33.1
importlib-metadata==7.0.1
Jinja2==3.1.2
lazy_loader==0.3
MarkupSafe==2.1.2
mpmath==1.3.0
networkx==3.0
numpy==1.24.1
omegaconf==2.3.0
opencv-python==4.9.0.80
packaging==23.2
Pillow==9.3.0
psutil==5.9.7
PyYAML==6.0.1
regex==2023.10.3
requests==2.28.1
ruff==0.1.3
safetensors==0.4.0
scikit-image==0.22.0
scipy==1.11.4
sympy==1.12
tifffile==2023.12.9
timm==0.9.12
tokenizers==0.13.3
torch==2.1.0+cu118
torchaudio==2.1.0+cu118
torchsde==0.2.6
torchvision==0.16.0+cu118
tqdm==4.66.1
trampoline==0.1.2
transformers==4.30.2
triton==2.1.0
typing_extensions==4.4.0
urllib3==1.26.13
xformers==0.0.22.post7+cu118
zipp==3.17.0
Who can help?
@yiyixuxu @DN6 @sayakpaul @patrickvonplaten