Open BLABZ opened 1 year ago
how to reproduce the error ?
Oh hello there! Great to meet you, by the way! You seem to do some really great work! :) As for the copy of the Colab..... I do have a fairly large number of LoRAs that I've downloaded from CivitAI into the notebook, but I figured that wouldn't entirely break things. I used a helpful bit of code from a mod in a Discord server that helps with the AUTOMATIC1111 Stable Diffusion notebook.
This is the code; it's just used to download elements from the web:
def load_element_from_web():
    #@markdown # Set paths
    #@markdown Enter the location you want to install each type of file into.
    #@markdown
    #@markdown *Note: If you're using TheLastBen's Colab, you won't have to change any of these.*
    model_directory = "/content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion" #@param {type:"string"}
    vae_directory = "/content/gdrive/MyDrive/sd/stable-diffusion-webui/models/VAE" #@param {type:"string"}
    lora_directory = "/content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Lora" #@param {type:"string"}
    hyper_network_directory = "/content/gdrive/MyDrive/sd/stable-diffusion-webui/models/hypernetworks" #@param {type:"string"}
    textual_inversion_directory = "/content/gdrive/MyDrive/sd/stable-diffusion-webui/embeddings" #@param {type:"string"}
    #@markdown ---
    #@markdown # Load element
    #@markdown Select the type of file you want to load and paste the direct link to the file in the input field below. The file will be named the same as the original file - if you want to rename it, you can do so in Colab's file browser.
    #@markdown
    #@markdown Run this cell to install it, then repeat for everything you'd like to load in.
    file_type_to_dl = "LoRA" #@param ["Model", "VAE", "LoRA", "Hyper Network", "Textual Inversion"]
    link_to_file = "https://civitai.com/api/download/models/14882" #@param {type:"string"}

    def download(url, directory):
        !wget {url} --content-disposition -P {directory}
        print("Done!")

    if file_type_to_dl == "Model":
        print('Downloading model...')
        download(link_to_file, model_directory)
    elif file_type_to_dl == "VAE":
        print('Downloading VAE...')
        download(link_to_file, vae_directory)
    elif file_type_to_dl == "LoRA":
        print('Downloading LoRA...')
        download(link_to_file, lora_directory)
    elif file_type_to_dl == "Hyper Network":
        print('Downloading hyper network...')
        download(link_to_file, hyper_network_directory)
    elif file_type_to_dl == "Textual Inversion":
        print('Downloading textual inversion embedding...')
        download(link_to_file, textual_inversion_directory)

load_element_from_web()
~ ~ ~ ~ ~ ~
Aside from that, I'm using a smaller bit of code I found on Reddit, from a guy who was having a different issue that I was also facing, and that one was actually fixed with this code:
!pip install -r /content/gdrive/MyDrive/sd/stable-diffusion-webui/requirements_versions.txt
!pip install open_clip_torch
!pip install git+https://github.com/openai/CLIP.git
!pip install xformers
~ ~ ~ ~ ~ ~
^^ Those are the main two things I'm using that are different from your Colab ^^ I have the 1st one inserted into a separate cell before the 'ControlNet' cell, and the 2nd (shorter) one in a separate cell just before the 'Start Stable Diffusion' cell.
Whenever I go to start up the Colab, I restart my runtime (usually with 'GPU' as the hardware accelerator), then go down the page of cells and hit all of their play buttons before trying to use the 'Start Stable Diffusion' cell.
The issue is caused by !pip install -r /content/gdrive/MyDrive/sd/stable-diffusion-webui/requirements_versions.txt, which overwrites all of the notebook's dependencies.
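If you want to see what that command actually changes, a quick check like this (a rough sketch, assuming the default notebook paths) compares the pins in requirements_versions.txt with what's currently installed:

from importlib.metadata import distributions

req_file = "/content/gdrive/MyDrive/sd/stable-diffusion-webui/requirements_versions.txt"

# Installed distributions, with names normalized roughly the way pip does.
installed = {
    dist.metadata["Name"].lower().replace("_", "-"): dist.version
    for dist in distributions()
}

with open(req_file) as f:
    for line in f:
        line = line.split("#")[0].strip()
        if "==" not in line:
            continue
        name, pinned = (part.strip() for part in line.split("==", 1))
        current = installed.get(name.lower().replace("_", "-"), "not installed")
        if current != pinned:
            print(f"{name}: currently {current} -> requirements would install {pinned}")

Anything printed there is a package that extra cell downgrades or upgrades out from under the notebook.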
To fix the LoRA models issue, I suggest creating subfolders inside the Lora folder and splitting the LoRA models between them; that should prevent the issue. Try to avoid any additional code, so you don't break the notebook. Let me know how it goes.
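If it helps, the split is just something like this (the folder names and the filename are placeholders, organize them however you like):

import os, shutil

lora_dir = "/content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Lora"

# Example layout: a couple of category folders inside the Lora directory.
for sub in ("characters", "styles"):
    os.makedirs(os.path.join(lora_dir, sub), exist_ok=True)

# Move one of the downloaded files into a subfolder ("some_lora.safetensors" is a placeholder).
shutil.move(os.path.join(lora_dir, "some_lora.safetensors"),
            os.path.join(lora_dir, "characters"))

The webui scans the Lora folder recursively, so files inside subfolders should still show up in the extra networks tab.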
Oh? Okay, I'll not mess with that bit of the code then, or maybe just remove it entirely..... I think I had to use that, though, to fix something that wasn't working after I installed one new model from CivitAI. Without that bit, Stable Diffusion just wouldn't start, for an entirely different reason.
As for the folders, I'll try that, but I never wanted to mess much with adding folders because Stable Diffusion wouldn't work after I moved or touched anything directly in the files for some reason..... That's why I was using the 'Load Element From Web' script I got from the Discord mod... I was also not too sure that the notebook would be able to read/register that the LoRA models were separated into different folders.
I'll try both options though and get back to you here when I can!
UPDATE: I DID in fact get it to work btw! I apparently just have to restart the runtime once or twice on the notebook each time I get onto it! ..As for the LoRAs you mentioned.. I haven't yet started putting them in different folders and such, but that would essentially work?.. Like, while running Stable Diffusion, it would still be able to see that they're there even when they're organized into additional folders? I've encountered minor issues like that before with other similar programs and all. xD
why do you have to restart the runtime?
Just as the title states, apparently I'm dealing with an ImportError of some kind here. This is just the newest in a long series of issues I've been encountering. Here's the full traceback it gives:
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py", line 21, in <module>
    import pytorch_lightning # pytorch_lightning should be imported after torch, but it re-enables warnings on import so import once to disable them
  File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/__init__.py", line 34, in <module>
    from pytorch_lightning.callbacks import Callback  # noqa: E402
  File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/callbacks/__init__.py", line 26, in <module>
    from pytorch_lightning.callbacks.pruning import ModelPruning
  File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/callbacks/pruning.py", line 30, in <module>
    from pytorch_lightning.core.module import LightningModule
  File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/core/__init__.py", line 15, in <module>
    from pytorch_lightning.core.datamodule import LightningDataModule
  File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/core/datamodule.py", line 21, in <module>
    from pytorch_lightning.core.mixins import HyperparametersMixin
  File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/core/mixins/__init__.py", line 16, in <module>
    from pytorch_lightning.core.mixins.hparams_mixin import HyperparametersMixin  # noqa: F401
  File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/core/mixins/hparams_mixin.py", line 20, in <module>
    from pytorch_lightning.core.saving import ALLOWED_CONFIG_TYPES, PRIMITIVE_TYPES
  File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/core/saving.py", line 34, in <module>
    from pytorch_lightning.utilities.migration import pl_legacy_patch
  File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/utilities/migration/__init__.py", line 15, in <module>
    from pytorch_lightning.utilities.migration.utils import migrate_checkpoint  # noqa: F401
  File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/utilities/migration/utils.py", line 23, in <module>
    from lightning_fabric.utilities.imports import _IS_WINDOWS
  File "/usr/local/lib/python3.9/dist-packages/lightning_fabric/__init__.py", line 23, in <module>
    from lightning_fabric.fabric import Fabric  # noqa: E402
  File "/usr/local/lib/python3.9/dist-packages/lightning_fabric/fabric.py", line 32, in <module>
    from lightning_fabric.plugins import Precision  # avoid circular imports: # isort: split
  File "/usr/local/lib/python3.9/dist-packages/lightning_fabric/plugins/__init__.py", line 14, in <module>
    from lightning_fabric.plugins.environments.cluster_environment import ClusterEnvironment
  File "/usr/local/lib/python3.9/dist-packages/lightning_fabric/plugins/environments/__init__.py", line 20, in <module>
    from lightning_fabric.plugins.environments.xla import XLAEnvironment  # noqa: F401
  File "/usr/local/lib/python3.9/dist-packages/lightning_fabric/plugins/environments/xla.py", line 18, in <module>
    from lightning_fabric.accelerators.tpu import _XLA_AVAILABLE, TPUAccelerator
  File "/usr/local/lib/python3.9/dist-packages/lightning_fabric/accelerators/__init__.py", line 18, in <module>
    from lightning_fabric.accelerators.tpu import TPUAccelerator  # noqa: F401
  File "/usr/local/lib/python3.9/dist-packages/lightning_fabric/accelerators/tpu.py", line 21, in <module>
    from lightning_utilities.core.imports import ModuleAvailableCache
ImportError: cannot import name 'ModuleAvailableCache' from 'lightning_utilities.core.imports' (/usr/local/lib/python3.9/dist-packages/lightning_utilities/core/imports.py)
I was hoping someone may know of a fix for this!
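From the last two lines, it looks like the installed pytorch_lightning expects a name that the installed lightning_utilities doesn't provide, so the two packages seem to have ended up out of sync with each other (probably from my extra pip cell overwriting things). Here's the little check cell I've been running to see which versions actually ended up installed; just a diagnostic sketch on my end, not a fix:

# Quick diagnostic: print the versions of the packages involved in the traceback.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("torch", "pytorch-lightning", "lightning-fabric", "lightning-utilities"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")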