PixArt-alpha / PixArt-sigma

PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation
https://pixart-alpha.github.io/PixArt-sigma-project/
GNU Affero General Public License v3.0

Cannot load a Lora safetensors file #47

Closed: ukaprch closed this issue 2 months ago

ukaprch commented 2 months ago

My code:

```python
from peft import PeftModel

adapter_id = "C:/Users/kaprc/.cache/huggingface/hub/lora/add-detail-xl.safetensors"
transformer = PeftModel.from_pretrained(transformer, adapter_id)  # <=== ERROR BELOW
```

```
Message=Can't find 'adapter_config.json' at 'C:/Users/kaprc/.cache/huggingface/hub/lora/add-detail-xl.safetensors'
Source=C:\Users\kaprc\source\repos\AI\runtimes\bin\windows\Python38\Lib\site-packages\peft\config.py
StackTrace:
  File "C:\Users\kaprc\source\repos\AI\runtimes\bin\windows\Python38\Lib\site-packages\peft\config.py", line 197, in _get_peft_type
    config_file = hf_hub_download(
  File "C:\Users\kaprc\source\repos\AI\runtimes\bin\windows\Python38\Lib\site-packages\huggingface_hub\utils\_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "C:\Users\kaprc\source\repos\AI\runtimes\bin\windows\Python38\Lib\site-packages\huggingface_hub\utils\_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'C:/Users/kaprc/.cache/huggingface/hub/lora/add-detail-xl.safetensors'. Use repo_type argument if needed.
```

During handling of the above exception, another exception occurred:

```
  File "C:\Users\kaprc\source\repos\AI\runtimes\bin\windows\Python38\Lib\site-packages\peft\config.py", line 203, in _get_peft_type
    raise ValueError(f"Can't find '{CONFIG_NAME}' at '{model_id}'")
  File "C:\Users\kaprc\source\repos\AI\runtimes\bin\windows\Python38\Lib\site-packages\peft\peft_model.py", line 328, in from_pretrained
    PeftConfig._get_peft_type(
  File "C:\Users\kaprc\source\repos\AI\modules\Inpaint-Anything\stable_diffusion_inpaint.py", line 248, in text_2_image_with_pixart
    transformer = PeftModel.from_pretrained(transformer, adapter_id)
  File "C:\Users\kaprc\source\repos\AI\modules\Inpaint-Anything\app\app.py", line 215, in text_2_img_pixart (Current frame)
    text_2_img = text_2_image_with_pixart(
  File "C:\Users\kaprc\source\repos\AI\runtimes\bin\windows\Python38\Lib\site-packages\gradio\utils.py", line 661, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\kaprc\source\repos\AI\runtimes\bin\windows\Python38\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\kaprc\source\repos\AI\runtimes\bin\windows\Python38\Lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "C:\Users\kaprc\source\repos\AI\runtimes\bin\windows\Python38\Lib\threading.py", line 890, in _bootstrap
    self._bootstrap_inner()
ValueError: Can't find 'adapter_config.json' at 'C:/Users/kaprc/.cache/huggingface/hub/lora/add-detail-xl.safetensors'
```
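For context on the error itself (separate from the SDXL-compatibility issue below): `PeftModel.from_pretrained` takes a directory or a Hub repo id that contains an `adapter_config.json` alongside the adapter weights, not a path to a bare `.safetensors` file. A minimal sketch of that precondition — the helper function here is illustrative, not part of PEFT:

```python
import os

def looks_like_peft_adapter(path: str) -> bool:
    """PEFT resolves a local directory (or a Hub repo id) and requires an
    adapter_config.json inside it; passing a bare .safetensors file path
    triggers the HFValidationError / ValueError seen in the traceback."""
    return os.path.isdir(path) and os.path.isfile(
        os.path.join(path, "adapter_config.json")
    )

# A direct path to a safetensors file is not a valid adapter directory:
print(looks_like_peft_adapter("add-detail-xl.safetensors"))  # False
```

So even before the SDXL-vs-PixArt mismatch, the call would need to point at an adapter directory saved via PEFT's `save_pretrained` (which writes `adapter_config.json`) rather than at a single weights file.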

chrish-slingshot commented 2 months ago

SD 1.5 LoRAs won't work with this model.

ukaprch commented 2 months ago

They are SDXL LoRAs, not SD 1.5 LoRAs.

chrish-slingshot commented 2 months ago

Same problem though: PixArt isn't SDXL. LoRAs (if and when Sigma supports them) will need to be trained on its base model.

lawrence-cj commented 2 months ago

Released the LoRA-related code.

chrish-slingshot commented 2 months ago

That's awesome! Thanks for your hard work.