ieya114 opened 7 months ago
I think it is related to some Google update. I use Automatic1111 from TheLastBen, and it also stopped working for the same reason.
I ran TheLastBen's SD Automatic1111 Colab, and since Friday I also get the following error message:
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for: PyTorch 2.1.0+cu118 with CUDA 1106 (you have 2.1.0+cu121) Python 3.9.16 (you have 3.10.12) Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers) Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details
Is this a Colab/notebook version issue, or is it somehow connected to my local versions? Because python --version says I'm running 3.12.
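The warning above is about the Colab runtime, not the local machine: xFormers ships C++/CUDA extensions compiled against one specific PyTorch build, and it refuses to load them when the installed build differs. A minimal sketch of that check, using the version strings from the warning (the function name and the comparison are illustrative, not xFormers' actual code):

```python
# Sketch of the mismatch the xFormers warning reports: the torch build it was
# compiled against vs. the torch build actually installed in the runtime.
def cuda_tag(torch_version: str) -> str:
    """Extract the CUDA build tag (e.g. 'cu121') from a torch version string."""
    _, _, local = torch_version.partition("+")
    return local

built_for = "2.1.0+cu118"   # build xFormers was compiled against (from the warning)
installed = "2.1.0+cu121"   # build present in the Colab runtime (from the warning)

# Base versions match, but the CUDA tags differ, so the compiled
# C++/CUDA extensions refuse to load and xFormers falls back.
mismatch = cuda_tag(built_for) != cuda_tag(installed)
print(mismatch)  # True
```

So checking the local Python (3.12) tells you nothing here; the fix has to happen inside the notebook, by installing an xFormers wheel built for the runtime's torch.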
Yeah, as I said above, almost every Colab related to SD is broken due to Google's latest update. There is a temporary fix; I don't know if it will work for Kohya. Check out my topic in TheLastBen's discussion group.
The CUDA fix is pushed. It may still show something like "CUDA is not initialized", but I finished a LoRA training without error; the actual problem is in the bitsandbytes version.
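If the notebook still picks up a mismatched bitsandbytes, a common workaround is to reinstall it pinned to a version that matches the runtime's CUDA. The exact version to pin is an assumption here (it depends on the notebook and torch build); 0.41.1 is only an example:

```shell
# Hedged sketch: force-reinstall a pinned bitsandbytes inside the Colab cell.
# 0.41.1 is an assumed example version, not a confirmed fix for this notebook.
pip install --force-reinstall bitsandbytes==0.41.1
```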
It works, thanks for your efforts. Your trainer has the best training quality among the Colab notebooks.
I'm able to train using DAdaptation but not Adam8bit as a result of this.
CUDA backend failed to initialize: Found cuDNN version 8700, but JAX was built against version 8904, which is newer. The copy of cuDNN that is installed must be at least as new as the version against which JAX was built. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
Nothing is working with this error. How can I solve this problem?
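That cuDNN error comes from JAX, not from the trainer itself: the runtime ships cuDNN 8700 while the preinstalled JAX was built against 8904. Two hedged options people use in Colab, assuming the trainer does not need JAX at all (both commands are a sketch, not a confirmed fix for this notebook):

```shell
# Option 1: remove JAX so it cannot grab the CUDA backend at import time.
pip uninstall -y jax jaxlib

# Option 2: upgrade the pip-installed cuDNN wheel so it is at least as new
# as the version JAX was built against.
pip install -U nvidia-cudnn-cu11
```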