Azrox01 opened this issue 2 months ago (status: open)
Similar experience here, but only when using the xlabs flux nodes (running on Colab Pro).
This issue is being marked stale because it has not had any activity for 30 days. Reply below within 7 days if your issue still isn't solved, and it will be left open. Otherwise, the issue will be closed automatically.
I still have this issue.
If you are using flux, try not to select "default" for the weight_dtype of the LoadDiffusionModel node. That said, I also ran into this bug when I was not using flux.
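A possible reason this helps (my own note, not confirmed in this thread): with weight_dtype on "default" the flux transformer is normally kept in fp16/bf16, which for a ~12B-parameter model is already on the order of 22 GB, leaving very little headroom on a 24 GB card once a LoRA, text encoders, VAE and latents are added; an fp8 option roughly halves the weight footprint. A minimal back-of-the-envelope sketch, assuming a 12B parameter count:

```python
# Rough VRAM estimate for diffusion-model weights at different dtypes.
# The 12B figure is an assumption (FLUX.1-dev is roughly that size); adjust for your model.
def weight_vram_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1024**3

n_params = 12e9  # assumed parameter count
for name, nbytes in [("fp16/bf16 (default)", 2), ("fp8 (e4m3fn / e5m2)", 1)]:
    print(f"{name}: ~{weight_vram_gb(n_params, nbytes):.1f} GB for the model weights alone")
# Prints roughly 22.4 GB for fp16/bf16 vs 11.2 GB for fp8 -- on a 24 GB card the
# fp16 copy plus text encoders, LoRA patches and activations can exceed free VRAM.
```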
Your question
It was working fine yesterday, but now I am getting this error and I don't know why. This is my first time using an image-generation model, so I'm not sure what to do. A workflow loaded from a ComfyUI_examples image still runs fine; the error only appears when I use a workflow with a LoRA (or maybe something else is happening in the background). It worked fine before, even with the LoRA, and now it gives an "Allocation on device" error.
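For context (an added note, not part of the original report): "Allocation on device" is the message PyTorch's CUDA allocator raises when it cannot satisfy an allocation, i.e. the GPU ran out of free VRAM, which is why it tends to show up once a heavier workflow (such as one with a LoRA) is loaded. A quick way to see how much VRAM is actually free before queueing a prompt is a sketch like the following, assuming it is run in the same Python environment ComfyUI uses:

```python
import torch

# Report free vs. total VRAM on each CUDA device so an "Allocation on device"
# (out-of-memory) error can be correlated with what was actually available.
for i in range(torch.cuda.device_count()):
    free_b, total_b = torch.cuda.mem_get_info(i)
    print(f"cuda:{i} {torch.cuda.get_device_name(i)}: "
          f"{free_b / 1024**3:.1f} GiB free of {total_b / 1024**3:.1f} GiB")
```

If the free figure is close to zero while ComfyUI is idle, another process (or a previous run that was not released) may be holding the memory.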
Logs
System Information
ComfyUI Version: v0.2.2-43-ge813abb
Arguments: ComfyUI\main.py --windows-standalone-build
OS: nt
Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
Embedded Python: true
PyTorch Version: 2.4.1+cu124
Devices
Name: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Logs
Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
Additional Context
(Please add any additional context or steps to reproduce the error here)