littleyeson opened this issue 6 months ago
yeson@Yai-linux:~/AI/kohya_ss$ ./gui.sh
22:25:30-336035 INFO Kohya_ss GUI version: v24.0.6
22:25:30-418524 INFO Submodule initialized and updated.
22:25:30-420240 INFO nVidia toolkit detected
22:25:33-087224 INFO Torch 2.1.2+cu118
22:25:33-163018 INFO Torch backend: nVidia CUDA 11.8 cuDNN 8907
22:25:33-196222 INFO Torch detected GPU: Tesla V100-SXM2-16GB VRAM 16384 Arch (7, 0) Cores 80
22:25:33-198027 INFO Torch detected GPU: NVIDIA GeForce RTX 2080 Ti VRAM 22528 Arch (7, 5) Cores 68
22:25:33-205281 INFO Python version is 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
22:25:33-206629 INFO Verifying modules installation status from /home/yeson/AI/kohya_ss/requirements_linux.txt...
22:25:33-211382 INFO Verifying modules installation status from requirements.txt...
22:25:39-597028 INFO headless: False
22:25:39-671550 INFO Using shell=True when running external commands...
2024-04-25 22:25:40.490336: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2024-04-25 22:25:40.818537: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-04-25 22:25:40.818726: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-04-25 22:25:40.884024: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-04-25 22:25:41.018502: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2024-04-25 22:25:41.020495: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-04-25 22:25:42.725632: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
nvcc -V can see CUDA, and the GUI detects the GPU names, but training cannot use CUDA or the GPUs.
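Note that the "Could not find cuda drivers" lines above appear to come from TensorFlow, which kohya_ss only pulls in as a dependency; the training itself runs on PyTorch, and the startup log already shows Torch 2.1.2+cu118 enumerating both GPUs. A minimal sketch to confirm whether PyTorch can actually use CUDA at runtime is below; it assumes you run it with the Python interpreter from the same venv that gui.sh activates (e.g. the venv inside the kohya_ss checkout; the exact path may differ on your install).

import torch

# Which torch build is installed, and which CUDA runtime it was compiled against.
print("torch version:      ", torch.__version__)   # log above reports 2.1.2+cu118
print("torch CUDA build:   ", torch.version.cuda)

# The key check: False here means torch cannot talk to the driver,
# regardless of what nvcc -V or the GUI banner report.
print("cuda available:     ", torch.cuda.is_available())
print("device count:       ", torch.cuda.device_count())

# List the devices torch can see (should include the V100 and the 2080 Ti).
for i in range(torch.cuda.device_count()):
    print(f"device {i}:", torch.cuda.get_device_name(i))

If torch.cuda.is_available() returns False inside the venv while nvidia-smi still lists both cards, the likely causes are a driver/runtime mismatch or a CPU-only torch wheel having replaced the CUDA build during a later dependency install; this is a diagnostic sketch, not a confirmed fix.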