Open · DarkViewAI opened this issue 2 months ago
Having the same issue with a LoRA I made. I'm getting:

```
[LORA] LoRA version mismatch for KModel
```

I believe this is what Comfy did to handle them: https://github.com/comfyanonymous/ComfyUI/commit/d043997d30d91ab057f770d3396c2e288e37b38a
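I haven't gone through that diff line by line, but as far as I can tell the idea is to remap the trainer-specific LoRA key names onto the keys the loaded model actually uses before patching, instead of rejecting them. A minimal sketch of that idea (the prefixes and the function here are hypothetical illustrations, not ComfyUI's or Forge's actual API):

```python
# Illustrative sketch only, not the code from that ComfyUI commit:
# translate known trainer-specific LoRA key prefixes to the module names the
# loaded model actually exposes, so the weights aren't dropped as a
# "version mismatch". The prefixes below are hypothetical examples.

PREFIX_MAP = {
    "lora_transformer_": "diffusion_model.",
    "lora_unet_": "diffusion_model.",
}

def remap_lora_keys(lora_sd, model_keys):
    remapped = {}
    for key, tensor in lora_sd.items():
        new_key = key
        for src, dst in PREFIX_MAP.items():
            if key.startswith(src):
                new_key = dst + key[len(src):]
                break
        # keep only keys that resolve to a real module in the model; the
        # leftovers are what trigger the mismatch warning and get ignored
        module_name = new_key.split(".lora_", 1)[0]
        if module_name in model_keys:
            remapped[new_key] = tensor
    return remapped
```

If Forge did something equivalent before its version check, these LoRAs would presumably stop being silently skipped.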
LoRAs trained in OneTrainer don't seem to work in Forge. They get unloaded and the results are quite bad; the LoRA is simply skipped instead of being applied. They also don't work in Comfy: https://github.com/comfyanonymous/ComfyUI/issues/4695
```
[Unload] Trying to free 13558.57 MB for cuda:0 with 0 models keep loaded ...
Unload model IntegratedAutoencoderKL Done.
[Memory Management] Target: JointTextEncoder, Free GPU: 22877.65 MB, Model Require: 9641.98 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 12211.66 MB, All loaded to GPU.
Moving model(s) has taken 1.70 seconds
Distilled CFG Scale: 3.5
[Unload] Trying to free 31015.46 MB for cuda:0 with 0 models keep loaded ...
Unload model JointTextEncoder Done.
[Memory Management] Target: KModel, Free GPU: 22876.50 MB, Model Require: 22700.13 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: -847.64 MB, CPU Swap Loaded (blocked method): 2142.00 MB, GPU Loaded: 20558.13 MB
Moving model(s) has taken 5.53 seconds
```
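Side note on the log above: the `[Memory Management]` numbers are just the VRAM budget Forge computes before each load, and they're unrelated to the LoRA being ignored. My rough reading of the arithmetic (a sketch based on the log, not Forge's actual code):

```python
# My reading of the [Memory Management] lines above, not Forge's actual code.
# Remaining = Free GPU - (Model Require - Previously Loaded) - Inference Require;
# if it goes negative, part of the model stays in CPU swap instead of VRAM.

def plan_load(free_gpu, model_require, inference_require, previously_loaded=0.0):
    remaining = free_gpu - (model_require - previously_loaded) - inference_require
    if remaining >= 0:
        return {"remaining": remaining, "cpu_swap": 0.0, "gpu_loaded": model_require}
    deficit = -remaining
    return {"remaining": remaining,
            "cpu_swap": deficit,
            "gpu_loaded": model_require - deficit}

# KModel line from the log: 22876.50 MB free, 22700.13 MB required, 1024 MB reserved
print(plan_load(22876.50, 22700.13, 1024.00))
# remaining comes out to about -847.6, matching the -847.64 MB in the log; the
# actual swap is larger (2142.00 MB, "blocked method"), presumably because whole
# blocks are offloaded rather than an arbitrary number of bytes
```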