comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

How can I get ComfyUI to use both my NVIDIA RTX 3090 and 3090 Ti 24 GB VRAM GPUs to train FLUX.1 Dev? #4818

Closed: zonkers72 closed this issue 1 month ago

zonkers72 commented 1 month ago

Your question

How can I get ComfyUI to use both my NVIDIA RTX 3090 and 3090 Ti 24 GB VRAM GPUs to train FLUX.1 Dev?
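
For the ComfyUI server itself, device selection is per process, and the log below shows it binding to a single card (Device: cuda:0, the 3090 Ti). A minimal sketch of pinning the server to a specific GPU, assuming the standard CUDA_VISIBLE_DEVICES environment variable and ComfyUI's --cuda-device launch flag; the index values here are assumptions, so verify the ordering with nvidia-smi first:

# Pin the ComfyUI process to one card via the standard CUDA environment variable
# (the index 1 is an assumption; confirm the mapping with nvidia-smi)
CUDA_VISIBLE_DEVICES=1 python3 main.py

# Or use ComfyUI's own device-selection flag for the same effect
python3 main.py --cuda-device 1

This only chooses which GPU the ComfyUI process runs on; it does not by itself make a training job span both cards.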

Logs

User-desktop:~/Documents/flux/comfyui/ComfyUI$ python3 main.py
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-09-06  
** Platform: Linux
** Python version: 3.10.12  
** Python executable: /usr/bin/python3
** ComfyUI Path: /home/user/Documents/flux/comfyui/ComfyUI
** Log path: /home/user/Documents/flux/comfyui/ComfyUI/comfyui.log

Prestartup times for custom nodes:
   0.0 seconds: /home/user/Documents/flux/comfyui/ComfyUI/custom_nodes/rgthree-comfy
   0.6 seconds: /home/user/Documents/flux/comfyui/ComfyUI/custom_nodes/ComfyUI-Manager

Total VRAM 24149 MB, total RAM 257568 MB
pytorch version: 2.3.0+cu118
xformers version: 0.0.26.post1
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 Ti : cudaMallocAsync
Using xformers cross attention
[Prompt Server] web root: /home/user/Documents/flux/comfyui/ComfyUI/web
(pysssss:WD14Tagger) [DEBUG] Available ORT providers: TensorrtExecutionProvider, CUDAExecutionProvider, AzureExecutionProvider, CPUExecutionProvider
(pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider
### Loading: ComfyUI-Inspire-Pack (V1.1)
/home/user/.local/lib/python3.10/site-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
2024-09-06 23:06:35.657719: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:479] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-06 23:06:35.671348: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:10575] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-06 23:06:35.671373: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1442] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-06 23:06:35.680785: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-06 23:06:36.331140: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/home/user/.local/lib/python3.10/site-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
/home/user/.local/lib/python3.10/site-packages/matplotlib/projections/__init__.py:63: UserWarning: Unable to import Axes3D. This may be due to multiple versions of Matplotlib being installed (e.g. as a system package and as a pip package). As a result, the 3D projection is not available.
  warnings.warn("Unable to import Axes3D. This may be due to multiple versions of "
Total VRAM 24149 MB, total RAM 257568 MB
pytorch version: 2.3.0+cu118
xformers version: 0.0.26.post1
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 Ti : cudaMallocAsync
[Crystools INFO] Crystools version: 1.16.6
[Crystools INFO] CPU: AMD Ryzen Threadripper PRO 5965WX 24-Cores - Arch: x86_64 - OS: Linux 6.8.0-40-generic
[Crystools INFO] Pynvml (Nvidia) initialized.
[Crystools INFO] GPU/s:
[Crystools INFO] 0) NVIDIA GeForce RTX 3090
[Crystools INFO] 1) NVIDIA GeForce RTX 3090 Ti
[Crystools INFO] NVIDIA Driver: 555.42.06

[rgthree] Loaded 42 extraordinary nodes.
[rgthree] NOTE: Will NOT use rgthree's optimized recursive execution as ComfyUI has changed.

Current version of toml: 0.10.2
Current version of voluptuous: 0.13.1
Current version of transformers: 4.44.2
Current version of bitsandbytes: 0.43.3
Current version of cv2: 4.10.0
Current version of accelerate: 0.34.2
Current version of tensorboardX: 2.6.2.2
Current version of tensorboard: 2.16.2
Current version of xformers: 0.0.26.post1
Current version of diffusers: 0.25.0
LORA-Training-in-Comfy: Loaded
WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `/home/user/Documents/flux/comfyui/ComfyUI/custom_nodes/was-node-suite-comfyui/was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
WAS Node Suite: Finished. Loaded 218 nodes successfully.

Other

No response

ltdrdata commented 1 month ago

This issue should be moved to the related repo.

zonkers72 commented 1 month ago

> This issue should be moved to the related repo.

Can you send the link to said related repo, or can you help me out?

ltdrdata commented 1 month ago

> > This issue should be moved to the related repo.
>
> Can you send the link to said related repo, or can you help me out?

You are using Lora-Training-in-Comfy (https://github.com/LarryJane491/Lora-Training-in-Comfy), so this training question belongs in that repository.
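
If Lora-Training-in-Comfy drives kohya-style training scripts through Hugging Face accelerate (the toml / voluptuous / bitsandbytes / accelerate dependency check in the startup log suggests that), multi-GPU use is normally requested at launch time rather than through ComfyUI itself. A hedged sketch only; the script name and config file below are placeholders, while --multi_gpu, --num_processes, and --gpu_ids are standard accelerate launch options:

# Placeholder launch command: train_network.py and flux_lora_config.toml stand in for
# whatever the custom node actually invokes on your install
accelerate launch --multi_gpu --num_processes 2 --gpu_ids 0,1 \
  train_network.py --config_file flux_lora_config.toml

Whether the node exposes such a launch path is a question for that repository, which is why the issue belongs there.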