Is there an existing issue for this problem?
Operating system
Windows
GPU vendor
Nvidia (CUDA)
GPU model
No response
GPU VRAM
No response
Version number
5.0.0
Browser
Brave
Python dependencies
No response
What happened
I created a LoRA using FlexGym, leaving all settings at their defaults except for the number of epochs. When importing this LoRA with the Model Manager, it produces an error.
What you expected to happen
The LoRA should be imported successfully.
How to reproduce the problem
LoRA here: https://zippyshare.day/0kDGvKITOanFlBt/file. Download it and try to import it.
Additional context
This LoRA works successfully in Forge, although the mode has to be set manually to "bnb-nf4 (fp16 LoRa)". It also works in Comfy.
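Not part of the original report, but a small diagnostic that maintainers often ask for in import failures like this: listing the LoRA's tensor key names, which usually shows whether the importer's key-format probe is what fails. This sketch parses the safetensors header directly with the standard library (the format is an 8-byte little-endian header length followed by a JSON header); the demo file and the key name in it are made up purely for illustration.

```python
import json
import struct

def read_safetensors_keys(path):
    # A .safetensors file begins with an 8-byte little-endian unsigned
    # header length, followed by a JSON header mapping tensor names to
    # dtype/shape/offset info. Reading just the header avoids loading
    # any tensor data (no torch needed).
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return sorted(k for k in header if k != "__metadata__")

# Build a tiny demo file in the same layout so the sketch is runnable
# without the actual LoRA; the key name is a made-up example.
demo_header = {
    "lora_unet_demo.lora_down.weight": {
        "dtype": "F32", "shape": [2, 2], "data_offsets": [0, 16],
    }
}
blob = json.dumps(demo_header).encode("utf-8")
with open("demo_lora.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(blob)))
    f.write(blob)
    f.write(b"\x00" * 16)  # 2x2 float32 tensor body, all zeros

print(read_safetensors_keys("demo_lora.safetensors"))
# → ['lora_unet_demo.lora_down.weight']
```

To inspect the actual file, point `read_safetensors_keys()` at the downloaded LoRA's path and paste the first few keys into the issue.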
Discord username
barafu_albino