Please check our tutorial: https://youtu.be/N_LUxd6G2nk?si=1tCYvnkhvByelh54
Same error.
You need to use valid LoRA models.
For example?
This is the model I used; isn't it valid? https://civitai.com/models/267242/proteus
Oh, never mind, I figured it out: it was a checkpoint, not a LoRA model. So can I use checkpoints in FastSD CPU?
Yes, it is not a LoRA model.
Can I use checkpoint models on FastSD CPU?
Is this true? Gemini says: "FastSD CPU does not currently support loading custom checkpoint models. It is designed to work with specific models optimized for faster inference on CPUs.
However, FastSD CPU does support:
- LCM (Latent Consistency Models): These models are specifically designed for fast inference on CPUs.
- OpenVINO models (SDXS-512-0.9): These models leverage the OpenVINO toolkit for accelerated inference on Intel CPUs.
- LCM-LoRA: This is a special type of model that can be used to fine-tune existing models or adapt them to specific styles.

You can find more information about the supported models and how to use them on the FastSD CPU GitHub page: https://github.com/rupeshs/fastsdcpu
The developers of FastSD CPU are actively working on adding new features, so it's possible that custom checkpoint support might be added in the future. You can keep an eye on the project's updates on GitHub for any announcements.
Please note that loading custom checkpoint models might require additional technical knowledge and modifications to the software. If you are not familiar with these processes, it is recommended to wait for official support or seek assistance from the FastSD CPU community."
Anyway, thanks a lot.
FastSD CPU supports the diffusers model format (safetensors folders), not single-file checkpoints.
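For anyone who hits this later, here is a rough sketch of what the format difference means in practice. It uses the diffusers library to convert a single-file checkpoint into a diffusers-format folder; the file and folder names are placeholders, and this is only an illustration of the format, not an official FastSD CPU feature:

```python
# Sketch: convert a single-file .safetensors checkpoint into the
# diffusers folder format. Paths below are placeholders.
from diffusers import StableDiffusionPipeline

# Load the single-file checkpoint (e.g. one downloaded from Civitai).
pipe = StableDiffusionPipeline.from_single_file("model.safetensors")

# Save it as a diffusers-format folder (model_index.json, unet/, vae/, ...).
pipe.save_pretrained("model-diffusers")
```

If the checkpoint is SDXL-based, `StableDiffusionXLPipeline.from_single_file` would be the appropriate loader instead; check the FastSD CPU README for which model types are actually supported.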
ok
I am getting this error when trying to load a LoRA model: