Same here. I didn't have this issue before; it seems to have shown up recently, maybe due to a system update?
It looks like it couldn't find the protobuf library when trying to run. Try installing the library by adding the line `!pip install protobuf` somewhere, ideally before installing the requirements (the line `!pip install -r requirements.txt` or something similar).
Note: without knowing which Colab notebook you are trying to run, it is hard to test and provide an exact fix.
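For example, the install cell could look like this (a minimal sketch; the exact cell layout depends on your notebook, and the sd-scripts paths are an assumption based on the snippet below):

```
!pip install protobuf
%cd sd-scripts
!pip install -r requirements.txt
%cd ..
```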
@DKnight54 The settings, etc. are tentative.
```python
from google.colab import drive
drive.mount('/content/drive')
git_repo_path = "test/test" #@param {type:"string"}
token = "test" #@param {type:"string"}
%env TOKEN=$token
#@title
# Install the remaining packages
git_repo_url = "https://$$TOKEN@github.com/" + git_repo_path + ".git"
#@markdown * URL of the repository containing the dataset
!git clone $git_repo_url
!git clone https://github.com/kohya-ss/sd-scripts.git
!pip install lycoris_lora
!pip install wandb
!pip install xformers
!pip install protobuf
!pip install lion_pytorch
%cd sd-scripts
!pip install -r requirements.txt
%cd ..
from accelerate.utils import write_basic_config
write_basic_config()
pretrained_model_name_or_path = "StupidGame/AnyLoRA" #@param ["enryu43/anifusion_sd_unet_768", "hakurei/waifu-diffusion", "Nilaier/Waifu-Diffusers","CompVis/stable-diffusion-v1-4", "naclbit/trinart_stable_diffusion_v2,diffusers-115k", "naclbit/trinart_stable_diffusion_v2,diffusers-95k", "naclbit/trinart_stable_diffusion_v2,diffusers-60k"] {allow-input: true}
#@markdown * Enter the location of either the source Diffusers model or a ckpt file.
pretrained_model_is_v2 = False #@param {type:"boolean"}
#@markdown * Specify whether the source model is an SDv2 derivative.
pretrained_model_resolution = "512x512" #@param ["512x512", "768x768"]
#@markdown * Select the training resolution of the source model.
#@markdown ----
datasets_path = "/content/kohya-mydatasets/datasets/test.toml" #@param {type:"string"}
prompts_path = "/content/kohya-mydatasets/datasets/test.txt" #@param {type:"string"}
dream_booth_epochs = 20 #@param {type:"integer"}
#@markdown * Number of epochs to train for
#@markdown * **To train roughly the same as the original Diffusers version or XavierXiao's StableDiffusion version, double the number of steps.**
#@markdown ----
learning_rate = 3e-5 #@param {type:"number"}
#@markdown * Learning rate
#@markdown ----
dream_booth_model_ext = "safetensors" #@param ["pt", "ckpt", "safetensors"]
#@markdown * Specify the format to save in.
dream_booth_new_model = "test" #@param {type:"string"}
#@markdown * Specify the name of the file / folder to save.
#@markdown ----
output_dir = "/content/drive/MyDrive/loras" #@param{type:"string"}
#@markdown * Specify the location of the file / folder to save.
#@markdown ----
network_dim = 8 #@param {type:"integer"}
#@markdown * Number of dimensions
#@markdown ----
network_alpha = 32 #@param {type:"integer"}
#@markdown * Threshold
#@markdown ----
te_coef = 0.5 #@param {type:"number"}
#@markdown * Coefficient for the text encoder learning rate
#@markdown ----
unet_coef = 1 #@param {type:"number"}
#@markdown * Coefficient for the U-Net learning rate
#@markdown ----
shutdown = True #@param {type:"boolean"}
#@markdown ----
dream_booth_new_model = dream_booth_new_model + "_" + str(learning_rate)
output_dir = output_dir + "/" + dream_booth_new_model
conv_dim = "conv_dim=" + str(network_dim)
conv_alpha = "conv_alpha=" + str(network_alpha)
import os
import glob
import shutil
os.makedirs("output", exist_ok=True)
text_lr = learning_rate * te_coef
unet_lr = learning_rate * unet_coef
!accelerate launch --num_cpu_threads_per_process 12 sd-scripts/train_network.py \
--pretrained_model_name_or_path=$pretrained_model_name_or_path \
--dataset_config=$datasets_path \
--network_dim=$network_dim \
--network_alpha=$network_alpha \
--output_dir=$output_dir \
--lr_scheduler="cosine_with_restarts" \
--lr_scheduler_num_cycles=2 \
--text_encoder_lr=$text_lr \
--unet_lr=$unet_lr \
--output_name=$dream_booth_new_model \
--prior_loss_weight=1.0 \
--seed=42 \
--max_train_epochs=$dream_booth_epochs \
--optimizer_type="Lion" \
--optimizer_args "weight_decay=1e-1" "betas=0.9,0.99" \
--max_grad_norm=1.0 \
--mixed_precision='fp16' \
--xformers \
--gradient_checkpointing \
--save_precision='fp16' \
--sample_every_n_epochs=1 \
--sample_prompts=$prompts_path \
--save_model_as=$dream_booth_model_ext \
--cache_latents \
--bucket_no_upscale \
--log_with="wandb" \
--wandb_api_key="test" \
--network_module=lycoris.kohya \
--network_args $conv_dim $conv_alpha "algo=loha" \
--max_token_length=150 \
--logging_dir=logs \
--noise_offset=0.05 \
--scale_weight_norms=1.3 \
--adaptive_noise_scale=0.05 \
--clip_skip=2 \
--min_snr_gamma=5
if shutdown:
    from google.colab import runtime
    runtime.unassign()
```
Recently, the sdxl branch was merged into the main branch. Perhaps that causes this error.
You can avoid this problem by using an old version of sd-scripts. Or you can run the commands below after installing sd-scripts and its requirements. In my case, the error disappeared.
```
!pip install --upgrade protobuf
!cp /usr/local/lib/python3.10/dist-packages/google/protobuf/internal/builder.py /content/
!pip install protobuf==3.19.6
!cp /content/builder.py /usr/local/lib/python3.10/dist-packages/google/protobuf/internal/
```
I'm not sure whether this may cause undesirable side effects or not.
I referred to this web site: https://stackoverflow.com/questions/71759248/importerror-cannot-import-name-builder-from-google-protobuf-internal
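To check that the workaround took effect, you can try the import that originally failed (a small sketch):

```python
# If this import succeeds, the builder.py copy worked.
from google.protobuf.internal import builder
print(builder.__file__)
```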
@kohya-ss Oh dear. requirements.txt is installing protobuf version 3.19.6, but when I check the conflict error messages, they say that tensorflow needs protobuf>=3.20.3.
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.13.0 requires protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.20.3, but you have protobuf 3.19.6 which is incompatible.
tensorflow 2.13.0 requires tensorboard<2.14,>=2.13, but you have tensorboard 2.10.1 which is incompatible.
tensorflow-datasets 4.9.3 requires protobuf>=3.20, but you have protobuf 3.19.6 which is incompatible.
tensorflow-metadata 1.14.0 requires protobuf<4.21,>=3.20.3, but you have protobuf 3.19.6 which is incompatible.
```
I think it may be a dependency conflict issue, as apparently tensorboard 2.10.1 is installed, but tensorflow 2.13.0 requires tensorboard<2.14,>=2.13.
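A quick way to surface these conflicts in a Colab cell (a minimal sketch; `pip check` only reports conflicts among packages that are already installed, and the grep just trims the `pip show` output):

```
!pip show protobuf tensorboard tensorflow | grep -E "^(Name|Version):"
!pip check
```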
@StupidGame, a temp workaround until the requirements conflict is fixed is to have this in the install section of your notebook:

```
!git clone $git_repo_url
!git clone https://github.com/kohya-ss/sd-scripts.git
!pip install lycoris_lora
!pip install wandb
!pip install xformers
!pip install lion_pytorch
%cd sd-scripts
!pip install -r requirements.txt
!pip install tensorboard==2.13
!pip install protobuf==3.20.3
%cd ..
```
Put the tensorboard and protobuf installs with the correct versions after the line `!pip install -r requirements.txt`.
That at least gets your script to the part where it fails because I didn't bother to put in any sort of training data, instead of the error you had.
The requirements.txt assumes tensorflow==2.10.1, because that version is the last one that supports GPU on a Windows environment. So please install tensorflow==2.10.1 if there is no issue with that.
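In a Colab install cell, that pin is just:

```
!pip install tensorflow==2.10.1
```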
If you need to install another version of TensorFlow, the workaround above will work.
I'd like to investigate how we can remove the TensorFlow dependency in the wd14 tagger.
@kohya-ss It seems that the wd14 tagger's repo provides an ONNX model for inference. Maybe we can replace the TensorFlow dependency with onnxruntime in the wd14 tagger.
However, all the wd14 taggers' ONNX models use inputs with a fixed shape [1, 448, 448, 3], which means batch_size is locked to 1...
Re-exporting the wd14 tagger with a dynamic shape would be needed if we want a larger batch size with the ONNX model...
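For illustration, here is a minimal sketch of relaxing that fixed batch dimension with the `onnx` package (the file path and the symbolic dim name are placeholders; whether the exported graph is actually batch-safe still needs verification, so a clean re-export from the original model is the safer route):

```python
import onnx

# Load the exported wd14 tagger model (hypothetical local path).
model = onnx.load("wd14_tagger.onnx")

# Replace the fixed batch dimension (currently 1) on every graph input
# and output with a symbolic "batch" dimension.
for tensor in list(model.graph.input) + list(model.graph.output):
    tensor.type.tensor_type.shape.dim[0].dim_param = "batch"

onnx.checker.check_model(model)
onnx.save(model, "wd14_tagger_dynamic.onnx")
```

After this, onnxruntime should accept inputs shaped [N, 448, 448, 3], assuming no node inside the graph hard-codes the batch size.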
> It seems that the wd14 tagger's repo provides an ONNX model for inference. Maybe we can replace the TensorFlow dependency with onnxruntime in the wd14 tagger.
Thank you for letting me know. It is nice! And also thank you for the PR about this. I will check it soon :)
@DKnight54 Thanks! It works!
I get an error when trying to train using Google Colab. Error description: