Open xingyouxin opened 1 month ago
I met this error on a 3090 GPU: NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
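For anyone hitting the same thing: this error means the model was constructed on PyTorch's meta device (shapes only, no storage), so `.to("cuda")` has nothing to copy. A minimal sketch of what the message is asking for (using a plain `nn.Linear` as a stand-in, not x-flux's actual loading code):

```python
import torch
import torch.nn as nn

# Build a module on the meta device: parameters have shapes but no data.
with torch.device("meta"):
    model = nn.Linear(4, 4)

# model.to("cpu") / model.to("cuda") would raise:
#   NotImplementedError: Cannot copy out of meta tensor; no data!
# to_empty() instead allocates fresh, uninitialized storage on the target device.
model = model.to_empty(device="cpu")  # use device="cuda" on the GPU

# The storage is uninitialized, so weights must be filled in afterwards,
# e.g. by loading a checkpoint state dict or re-running init.
model.reset_parameters()
print(tuple(model.weight.shape))
```

In a real training script the `reset_parameters()` step would normally be `model.load_state_dict(...)` from the checkpoint, so whichever code path in x-flux moves the model off meta needs to follow `to_empty()` with a weight load.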
Yes, so did I when using flux-dev-fp8.safetensors (11 GB). I also tried flux1-dev.safetensors (22 GB), which resulted in an out-of-memory error.
Have you tried sd-scripts for FLUX LoRA training? https://github.com/kohya-ss/sd-scripts/tree/99744af53afcb750b9a64b7efafe51f3f0da8826 I see it works on GPUs with 24 GB of VRAM.
Sorry, I haven't. But I have tried ai-toolkit for FLUX LoRA training, and tested it successfully in the web UI. https://github.com/ostris/ai-toolkit/ Now I am testing 青龙圣者 (Qinglongshengzhe)'s framework. If you are interested in it, don't hesitate to get in touch with me on QQ (593851428). https://www.bilibili.com/video/BV1RW421977P/?spm_id_from=333.788&vd_source=85be3ec5e95fd23384a835c58edf1296 We can do a great job together.
I tried to train x-flux with LoRA on an RTX 3090 GPU, but it seems to run out of memory.