**Closed** · Enyakk closed this issue 1 year ago
I am having this issue too, except I couldn't even start training. Switching to an older version, 3 commits back, solved my problems.
I am having the same issue as @Enyakk . But I don't think the commit he referenced would break anything. Which hash did you revert to @Claxiz ?
I reverted the latest big change; please check again.
Also, make sure to post your configs.
Was regular image captions used alongside the class caption/prompt/token setting? This may hint at further issues.
> I reverted the latest big change; please check again.

It works again for me.
@devilismyfriend It works again for me. However, the Model Playground is now broken: https://github.com/devilismyfriend/StableTuner/issues/72
```
Traceback (most recent call last):
  File "C:\Users\moo\miniconda3\envs\ST\lib\tkinter\__init__.py", line 1921, in __call__
    return self.func(*args)
  File "C:\Users\moo\miniconda3\envs\ST\lib\site-packages\customtkinter\windows\widgets\ctk_button.py", line 531, in _clicked
    self._command()
  File "E:\AIML\StableTuner\scripts\configuration_gui.py", line 1907, in <lambda>
    self.play_toolbox_label = ctk.CTkLabel(self.playground_frame_subframe, text="Toolbox")
  File "E:\AIML\StableTuner\scripts\configuration_gui.py", line 2271, in play_generate_image
    with torch.autocast("cuda"), torch.inference_mode():
AttributeError: play_current_image
```
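The `AttributeError: play_current_image` suggests `play_generate_image` touches `self.play_current_image` before any image has been generated and assigned in that session. A minimal sketch of a defensive fix, assuming the attribute holds the last generated image; the class and method names mirror the traceback, but the surrounding structure is a hypothetical reconstruction, not StableTuner's actual code:

```python
# Sketch: guard an attribute that may not exist yet on first use.
# Only the names play_generate_image / play_current_image come from
# the traceback; everything else here is hypothetical.

class PlaygroundStub:
    def play_generate_image(self):
        # getattr with a default avoids AttributeError when no image
        # has been generated in this session yet.
        previous = getattr(self, "play_current_image", None)
        if previous is None:
            pass  # first run: nothing to reuse or clean up
        new_image = object()  # stand-in for the generated PIL image
        self.play_current_image = new_image
        return new_image
```

An equivalent approach is to initialize `self.play_current_image = None` in `__init__` so the attribute always exists.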
And here is my config if you want to check it:
```json
{
    "concepts": [
        {
            "instance_prompt": "zwx",
            "class_prompt": "person",
            "instance_data_dir": "E:/AIML/iloader/mooos_data",
            "class_data_dir": "E:/AIML/dreamstall/train/class_images/person",
            "flip_p": "",
            "do_not_balance": 0,
            "use_sub_dirs": 0
        }
    ],
    "sample_prompts": [],
    "add_controlled_seed_to_sample": [],
    "model_path": "runwayml/stable-diffusion-v1-5",
    "vae_path": "stabilityai/sd-vae-ft-mse",
    "output_path": "models/new_model",
    "send_telegram_updates": 0,
    "telegram_token": "",
    "telegram_chat_id": "",
    "resolution": "512",
    "batch_size": "1",
    "train_epocs": "5",
    "mixed_precision": "fp16",
    "use_8bit_adam": 1,
    "use_gradient_checkpointing": 1,
    "accumulation_steps": "1",
    "learning_rate": "1e-6",
    "warmup_steps": "0",
    "learning_rate_scheduler": "constant",
    "regenerate_latent_cache": 1,
    "train_text_encoder": 1,
    "with_prior_loss_preservation": 1,
    "prior_loss_preservation_weight": "1.0",
    "use_image_names_as_captions": 0,
    "auto_balance_concept_datasets": 0,
    "add_class_images_to_dataset": 0,
    "number_of_class_images": "200",
    "save_every_n_epochs": "5",
    "number_of_samples_to_generate": "1",
    "sample_height": "512",
    "sample_width": "512",
    "sample_random_aspect_ratio": 0,
    "sample_on_training_start": 1,
    "aspect_ratio_bucketing": 1,
    "seed": "3434554",
    "dataset_repeats": "50",
    "limit_text_encoder_training": "",
    "use_text_files_as_captions": 0,
    "ckpt_version": null,
    "convert_to_ckpt_after_training": 0,
    "execute_post_conversion": 0,
    "disable_cudnn_benchmark": 1,
    "sample_step_interval": "500",
    "conditional_dropout": "",
    "clip_penultimate": 0,
    "use_ema": 0,
    "aspect_ratio_bucketing_mode": "Dynamic Fill",
    "dynamic_bucketing_mode": "Duplicate",
    "model_variant": "Regular",
    "fallback_mask_prompt": "",
    "attention": "xformers",
    "batch_prompt_sampling": 0,
    "shuffle_dataset_per_epoch": 1
}
```
One of the very recent changes after commit `cbe04acdf120aa4b5b1506e3120c421fb92df49a` wrecked the ability to train subjects in Dreambooth mode. I can A/B the repositories, and there's a stark difference in likeness and quality. Possibly one of the CLIP changes?