TheLastBen / fast-stable-diffusion

fast-stable-diffusion + DreamBooth
MIT License

Training on a custom (huggingface) model is broken #1345

Open flesler opened 1 year ago

flesler commented 1 year ago

I tried several different base models based on 1.5. I pasted the following into Path_to_HuggingFace (no path or link), with 1.5 selected as the custom model version:

All of them crash when it gets to training the UNet; I get:

Training the UNet...
Traceback (most recent call last):
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 852, in <module>
    main()
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 522, in main
    vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
  File "/usr/local/lib/python3.8/dist-packages/diffusers/modeling_utils.py", line 388, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named diffusion_pytorch_model.bin found in directory /content/stable-diffusion-custom.
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/diffusers/examples/dreambooth/train_dreambooth.py', '--stop_text_encoder_training=300', '--image_captions_filename', '--train_only_unet', '--save_starting_step=1000', '--save_n_steps=1000', '--Session_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-photoreal', '--pretrained_model_name_or_path=/content/stable-diffusion-custom', '--instance_data_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-photoreal/instance_images', '--output_dir=/content/models/jmilei-photoreal', '--captions_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-photoreal/captions', '--instance_prompt=', '--seed=643601', '--resolution=768', '--mixed_precision=fp16', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--gradient_checkpointing', '--use_8bit_adam', '--learning_rate=3e-06', '--lr_scheduler=polynomial', '--lr_warmup_steps=0', '--max_train_steps=4999']' returned non-zero exit status 1.
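For context, the failure comes from the AutoencoderKL call in the traceback; a minimal repro sketch, assuming the standard diffusers folder layout (the path is the one from the log):

from diffusers import AutoencoderKL

# With subfolder="vae", diffusers expects a diffusers-format model directory, i.e.
# /content/stable-diffusion-custom/vae/config.json and
# /content/stable-diffusion-custom/vae/diffusion_pytorch_model.bin
# (alongside unet/, text_encoder/, tokenizer/, scheduler/ and model_index.json).
vae = AutoencoderKL.from_pretrained("/content/stable-diffusion-custom", subfolder="vae")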

I tried to patch it by copying the files from /unet/ into the parent directory, as it seemed to expect. I then got this other error and rage-quit:

Training the UNet...
Traceback (most recent call last):
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 852, in <module>
    main()
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 522, in main
    vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
  File "/usr/local/lib/python3.8/dist-packages/diffusers/modeling_utils.py", line 451, in from_pretrained
    model, unused_kwargs = cls.from_config(
  File "/usr/local/lib/python3.8/dist-packages/diffusers/configuration_utils.py", line 202, in from_config
    model = cls(**init_dict)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/configuration_utils.py", line 516, in inner_init
    init(self, *args, **init_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/vae.py", line 544, in __init__
    self.encoder = Encoder(
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/vae.py", line 94, in __init__
    down_block = get_down_block(
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/unet_2d_blocks.py", line 67, in get_down_block
    raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlock2D")
ValueError: cross_attention_dim must be specified for CrossAttnDownBlock2D
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/diffusers/examples/dreambooth/train_dreambooth.py', '--stop_text_encoder_training=300', '--image_captions_filename', '--train_only_unet', '--save_starting_step=1000', '--save_n_steps=1000', '--Session_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-v3-protogen5.3', '--pretrained_model_name_or_path=/content/stable-diffusion-custom', '--instance_data_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-v3-protogen5.3/instance_images', '--output_dir=/content/models/jmilei-v3-protogen5.3', '--captions_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-v3-protogen5.3/captions', '--instance_prompt=', '--seed=425318', '--resolution=512', '--mixed_precision=fp16', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--gradient_checkpointing', '--use_8bit_adam', '--learning_rate=4e-06', '--lr_scheduler=polynomial', '--lr_warmup_steps=0', '--max_train_steps=4999']' returned non-zero exit status 1.
TheLastBen commented 1 year ago

Does the problem persist?

Omenizer commented 1 year ago

Yes, I just tried 2 merged models that didn't work. Base SD 1.5 works.

The models were loaded from Google Drive, SD from Hugging Face.

Converting to Diffusers ...
Traceback (most recent call last):
  File "/content/convertodiff.py", line 1115, in <module>
    convert(args)
  File "/content/convertodiff.py", line 1066, in convert
    text_encoder, vae, unet = load_models_from_stable_diffusion_checkpoint(v2_model, args.model_to_load)
  File "/content/convertodiff.py", line 835, in load_models_from_stable_diffusion_checkpoint
    checkpoint = load_checkpoint_with_text_encoder_conversion(ckpt_path)
  File "/content/convertodiff.py", line 816, in load_checkpoint_with_text_encoder_conversion
    checkpoint = torch.load(ckpt_path, map_location="cuda")
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 795, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1002, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '\x9f'.
rm: cannot remove '/content/stable-diffusion-custom': No such file or directory
Conversion error
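That UnpicklingError usually means the file torch tried to load isn't a valid checkpoint at all (for example a partial or corrupted download, or a non-pickle format). A hedged diagnostic sketch, with an illustrative path, to check the file's first bytes:

# Hedged diagnostic: peek at the start of the "ckpt" file (path is illustrative).
# A zip-based torch checkpoint starts with b"PK"; a legacy pickle checkpoint starts
# with b"\x80"; anything else (like the '\x9f' in the error) suggests a corrupted
# or wrong-format download.
with open("/content/gdrive/MyDrive/model.ckpt", "rb") as f:
    print(f.read(2))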

TheLastBen commented 1 year ago

Some models require a specific Python version, but most of them work fine. Do you have a link to the model?

Omenizer commented 1 year ago

Yesterday I tried many of the newer Civitai models, which are typically merges; they didn't work.

I'm currently dreamboothing https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/model.ckpt, which works. I'll post a link to a model that doesn't work later!

TheLastBen commented 1 year ago

I noticed that Civitai merged models don't work, but the Hugging Face versions work.

Omenizer commented 1 year ago

But I think that 2 days ago they did work?

Omenizer commented 1 year ago

OK, this is weird: I successfully trained the 22h ckpt, but now it fails to load after training?

Use_localtunnel: Only if you have trouble connecting to Gradio server

LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Failed to create model quickly; will retry using slow method.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
loading stable diffusion model: OSError
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py", line 74, in initialize
    modules.sd_models.load_model()
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_models.py", line 345, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/encoders/modules.py", line 99, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py", line 1761, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

Stable diffusion model failed to load, exiting

TheLastBen commented 1 year ago

Your sd folder is out of date; rename it and try again.

Omenizer commented 1 year ago

Yes, managed to load the trained model now!

flesler commented 1 year ago

Some models require a specific Python version, but most of them work fine. Do you have a link to the model?

This one reproduced it for me, @TheLastBen: https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0 (I included more details above).

Omenizer commented 1 year ago

That's funny, because I have a Dreamlike Photoreal 2.0 dreambooth from 2 days ago in my Google Drive, created in @TheLastBen's colab! So something must have broken in the meantime.

flesler commented 1 year ago

I'm doing a test run right now with the latest version

flesler commented 1 year ago

It blows up with this right now, both at the text encoder and then at the UNet:

File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 852, in <module>
    main()
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 726, in main
    accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py", line 920, in clip_grad_norm_
    self.unscale_gradients()
  File "/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py", line 904, in unscale_gradients
    self.scaler.unscale_(opt)
  File "/usr/local/lib/python3.8/dist-packages/torch/cuda/amp/grad_scaler.py", line 282, in unscale_
    optimizer_state["found_inf_per_device"] = self._unscale_grads_(optimizer, inv_scale, found_inf, False)
  File "/usr/local/lib/python3.8/dist-packages/torch/cuda/amp/grad_scaler.py", line 210, in _unscale_grads_
    raise ValueError("Attempting to unscale FP16 gradients.")
ValueError: Attempting to unscale FP16 gradients.
  0% 0/4999 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/diffusers/examples/dreambooth/train_dreambooth.py', '--stop_text_encoder_training=300', '--image_captions_filename', '--train_only_unet', '--save_starting_step=2000', '--save_n_steps=1000', '--Session_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-v2-photoreal2.0', '--pretrained_model_name_or_path=/content/stable-diffusion-custom', '--instance_data_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-v2-photoreal2.0/instance_images', '--output_dir=/content/models/jmilei-v2-photoreal2.0', '--captions_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-v2-photoreal2.0/captions', '--instance_prompt=', '--seed=48872', '--resolution=768', '--mixed_precision=fp16', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--gradient_checkpointing', '--use_8bit_adam', '--learning_rate=4e-06', '--lr_scheduler=polynomial', '--lr_warmup_steps=0', '--max_train_steps=4999']' returned non-zero exit status 1.
Something went wrong

It's a different error, though. Using Path_to_HuggingFace: dreamlike-art/dreamlike-photoreal-2.0, with 768x768 images (chosen in both places). Same at the text encoding step.

TheLastBen commented 1 year ago

For https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0, use the ckpt, not the diffusers weights; the diffusers model is in fp16, which doesn't work with DreamBooth.
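A quick hedged check (local path illustrative) to confirm the converted weights really are fp32 before training; fp16 parameters are what later trigger the "Attempting to unscale FP16 gradients" error under mixed_precision=fp16:

import torch
from diffusers import UNet2DConditionModel

# Load the converted UNet and inspect its parameter dtype.
unet = UNet2DConditionModel.from_pretrained("/content/stable-diffusion-custom", subfolder="unet")
print(next(unet.parameters()).dtype)  # torch.float32 is what the training script expects here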

Miscend commented 1 year ago

I had success with dreamlike-photoreal-2.0 when I put the CKPT file inside my gdrive.

Whenever I pasted the Hugging Face path, dreamlike-art/dreamlike-photoreal-2.0, I would get a crash.

TheLastBen commented 1 year ago

Yes, because the diffusers model published in their HF repo is in fp16, which is not the right type for DreamBooth.

flesler commented 1 year ago

@TheLastBen For the fp16 bit, is it possible to know without downloading? I expected the size to reflect it (like ~2GB for fp16, >4GB otherwise), but both the .bin and the .ckpt are ~2GB, and yet the ckpt is fine.

Also, if not, would it be easy to check for fp16 at the download stage? That way we don't get all the way to the bottom and create useless directories before realizing it's wrong.
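One hedged way to do such a check right after the download, assuming the diffusers layout used by the notebook (path illustrative): inspect the tensor dtypes in the UNet weights before training starts.

import torch

# Load just the UNet state dict on CPU and collect the dtypes it contains.
sd = torch.load("/content/stable-diffusion-custom/unet/diffusion_pytorch_model.bin", map_location="cpu")
dtypes = {t.dtype for t in sd.values() if torch.is_tensor(t)}
print(dtypes)  # {torch.float16} would mean the repo only ships fp16 weights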

flesler commented 1 year ago

BTW, I see the notebook replaces the VAE with 1.5's for custom models. Don't custom models sometimes have their own VAE (and rarely none) and actually count on it not being changed?
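For illustration, a hedged sketch of the difference being asked about (repo ids are examples only, not what the notebook actually does internally):

from diffusers import AutoencoderKL

# Swapping in the SD 1.5 VAE for a custom model:
vae_sd15 = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

# versus keeping the VAE the custom model ships with:
vae_custom = AutoencoderKL.from_pretrained("dreamlike-art/dreamlike-photoreal-2.0", subfolder="vae")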

TheLastBen commented 1 year ago

The fp32 UNet is 3.2 GB, not 1.6. The majority of repos have the diffusers model in fp32; Dreamlike is an exception, I don't know why.

As for the VAE, I will change that soon for custom models, since most of them now include the improved VAE.

Omenizer commented 1 year ago

Can I convert an fp16 ckpt to fp32 for DreamBooth if that's all I have?

TheLastBen commented 1 year ago

You can convert it to a ckpt, then use the ckpt as the base for training.
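If upcasting the file itself is wanted, here's a hedged sketch (paths illustrative; note that upcasting fp16 to fp32 only changes the storage dtype, it does not restore the precision that was discarded):

import torch

# Load the fp16 checkpoint on CPU.
ckpt = torch.load("model-fp16.ckpt", map_location="cpu")
state_dict = ckpt["state_dict"] if "state_dict" in ckpt else ckpt

# Cast every fp16 tensor to fp32 and save a new checkpoint.
for key, value in state_dict.items():
    if torch.is_tensor(value) and value.dtype == torch.float16:
        state_dict[key] = value.float()

torch.save({"state_dict": state_dict}, "model-fp32.ckpt")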

metzo007 commented 1 year ago

I tried to use protogenV22Anime_22.ckpt as a custom model. I don't get any message, but the next cell says "No model found, use the 'Model Download' cell to download a model."

CKPT_Link: /content/gdrive/MyDrive/model/protogenV22Anime_22.ckpt

If I use CKPT_Path instead, I also get this error message:

Converting to Diffusers ...
Traceback (most recent call last):
  File "/content/convertodiff.py", line 1115, in <module>
    convert(args)
  File "/content/convertodiff.py", line 1068, in convert
    pipe = StableDiffusionPipeline.from_pretrained(args.model_to_load, torch_dtype=load_dtype, tokenizer=None, safety_checker=None)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/pipeline_utils.py", line 482, in from_pretrained
    config_dict = cls.load_config(cached_folder)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/configuration_utils.py", line 312, in load_config
    raise EnvironmentError(
OSError: Error no file named model_index.json found in directory /content/gdrive/MyDrive/model/.
rm: cannot remove '/content/stable-diffusion-custom': No such file or directory

https://civitai.com/models/3627/protogen-v22-anime-official-release

Omenizer commented 1 year ago

I also had no luck dreamboothing Protogen :(

flesler commented 1 year ago

Protogen 3.whatever worked for me using the Hugging Face path, both times I tried. There are clearly many different errors going on here; maybe this needs a separate ticket for each, or whatever.

TheLastBen commented 1 year ago

Don't use Civitai; use Hugging Face: https://huggingface.co/darkstorm2150. The Hugging Face models work fine.

metzo007 commented 1 year ago

Thanks. Sadly, it took me 6h to upload the 4GB file, and now I need to upload a 6GB file :-(

Omenizer commented 1 year ago

From where to where?

You realize the colab can download straight from HF and Civitai?

You can also use the model download colab to transfer models.
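For example, a hedged sketch (repo id and filename taken from the links in this thread; the destination path is illustrative, and this is not necessarily what the colab cell does internally) of pulling a ckpt straight into Google Drive from Colab:

import shutil
from huggingface_hub import hf_hub_download

# Download the ckpt from Hugging Face into the local cache, then copy it to Drive.
path = hf_hub_download(
    repo_id="darkstorm2150/Protogen_v2.2_Official_Release",
    filename="Protogen_V2.2.ckpt",
)
shutil.copy(path, "/content/gdrive/MyDrive/model/Protogen_V2.2.ckpt")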

metzo007 commented 1 year ago

Thanks. Can you tell me how this works?

Is it as simple as putting this line into the Colab?

https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release/blob/main/Protogen_V2.2-pruned-fp16.ckpt

flesler commented 1 year ago

That one shouldn't work; it's fp16. Protogen worked for me when I put in the Hugging Face path instead (darkstorm2150/Protogen_v2.2_Official_Release).

Omenizer commented 1 year ago

https://colab.research.google.com/drive/1aQ5nXTfLWHhZi7GOfXteLKg5OT1X-aBZ

metzo007 commented 1 year ago

If you put in the path, how does the colab know which ckpt to use? Also, "darkstorm2150/Protogen_v2.2_Official_Release" looks a bit short. How does the colab even know that this is from Hugging Face?

But including the link from Omenizer, I can try 3 options later today:

1. Direct link: https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release/blob/main/Protogen_V2.2.ckpt
2. Model dir: darkstorm2150/Protogen_v2.2_Official_Release
3. Copy to drive with the colab

Additional question: is fp16 the same as pruned? Will pruned ckpts never work with DreamBooth?

Omenizer commented 1 year ago

fp16 and pruned are not the same. Both are different ways to discard unimportant data, AFAIK.

TheLastBen commented 1 year ago

If it's a ckpt, it will work even if it's in fp16.

flesler commented 1 year ago

I got this error trying a ckpt from Civitai (it's not on Hugging Face). I downloaded it to GDrive first and pointed the path at it:

Here's the ckpt URL: https://civitai.com/api/download/models/1292?type=Model&format=PickleTensor This is the model's page (V3 Alpha): https://civitai.com/models/1102/synthwavepunk

Converting to Diffusers ...
Traceback (most recent call last):
  File "/content/convertodiff.py", line 1115, in <module>
    convert(args)
  File "/content/convertodiff.py", line 1066, in convert
    text_encoder, vae, unet = load_models_from_stable_diffusion_checkpoint(v2_model, args.model_to_load)
  File "/content/convertodiff.py", line 847, in load_models_from_stable_diffusion_checkpoint
    info = unet.load_state_dict(converted_unet_checkpoint)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1667, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNet2DConditionModel:
    Missing key(s) in state_dict: "up_blocks.0.upsamplers.0.conv.weight", "up_blocks.0.upsamplers.0.conv.bias", "up_blocks.1.upsamplers.0.conv.weight", "up_blocks.1.upsamplers.0.conv.bias", "up_blocks.2.upsamplers.0.conv.weight", "up_blocks.2.upsamplers.0.conv.bias". 
    Unexpected key(s) in state_dict: "up_blocks.0.attentions.2.conv.bias", "up_blocks.0.attentions.2.conv.weight". 
rm: cannot remove '/content/stable-diffusion-custom': No such file or directory

Any idea, @TheLastBen?

Omenizer commented 1 year ago

If it's about this specific model, you can try the 50/50 merge yourself; maybe the result can be trained?

Omenizer commented 1 year ago

The model is on HF too, both the original and the merges.

flesler commented 1 year ago

The V3 alpha isn't. That's the one I'm REALLY looking to try :)

Omenizer commented 1 year ago

He posted the recipe; just do the merge yourself.
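For reference, a minimal hedged sketch of a plain 50/50 weighted-sum merge of two ckpts (file names illustrative; the actual recipe is whatever the author posted):

import torch

a = torch.load("model_a.ckpt", map_location="cpu")["state_dict"]
b = torch.load("model_b.ckpt", map_location="cpu")["state_dict"]

# Average every floating-point tensor the two checkpoints share; keep the rest from A.
merged = {}
for key, tensor in a.items():
    if key in b and torch.is_tensor(tensor) and tensor.dtype.is_floating_point:
        merged[key] = 0.5 * tensor.float() + 0.5 * b[key].float()
    else:
        merged[key] = tensor

torch.save({"state_dict": merged}, "merged.ckpt")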

Miscend commented 1 year ago

I get this error when using dreamlike-photoreal-2.0

Does it have any effect on the performance of the training?

Converting to Diffusers ... Downloading config.json100% 4.41k/4.41k [00:00<00:00, 5.74MB/s] Downloading pytorch_model.bin100% 1.59G/1.59G [00:24<00:00, 69.2MB/s] Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.14.layer_norm2.bias', 'vision_model.encoder.layers.23.self_attn.q_proj.bias', 'vision_model.encoder.layers.1.self_attn.v_proj.bias', 'vision_model.encoder.layers.14.mlp.fc1.weight', 'vision_model.encoder.layers.16.layer_norm1.bias', 'vision_model.encoder.layers.2.self_attn.v_proj.weight', 'vision_model.encoder.layers.13.layer_norm1.bias', 'vision_model.encoder.layers.3.self_attn.q_proj.bias', 'vision_model.encoder.layers.10.mlp.fc2.bias', 'vision_model.encoder.layers.3.layer_norm1.bias', 'vision_model.encoder.layers.1.mlp.fc2.bias', 'vision_model.encoder.layers.16.self_attn.v_proj.bias', 'vision_model.encoder.layers.7.layer_norm2.bias', 'vision_model.encoder.layers.22.mlp.fc2.weight', 'vision_model.encoder.layers.11.self_attn.q_proj.bias', 'vision_model.encoder.layers.11.layer_norm2.weight', 'vision_model.encoder.layers.10.self_attn.k_proj.bias', 'vision_model.embeddings.position_embedding.weight', 'vision_model.encoder.layers.16.self_attn.k_proj.weight', 'vision_model.encoder.layers.12.layer_norm1.weight', 'vision_model.encoder.layers.12.self_attn.out_proj.weight', 'vision_model.encoder.layers.0.mlp.fc2.bias', 'vision_model.encoder.layers.7.layer_norm2.weight', 'vision_model.encoder.layers.7.layer_norm1.bias', 'vision_model.encoder.layers.11.self_attn.v_proj.bias', 'vision_model.encoder.layers.22.mlp.fc1.weight', 'vision_model.encoder.layers.10.self_attn.out_proj.bias', 'vision_model.encoder.layers.20.layer_norm2.weight', 'vision_model.encoder.layers.9.self_attn.v_proj.weight', 'vision_model.pre_layrnorm.weight', 'vision_model.encoder.layers.22.self_attn.q_proj.weight', 'vision_model.encoder.layers.3.self_attn.v_proj.weight', 'vision_model.encoder.layers.14.self_attn.v_proj.weight', 'vision_model.encoder.layers.22.self_attn.out_proj.bias', 'vision_model.encoder.layers.14.self_attn.q_proj.bias', 'vision_model.encoder.layers.2.mlp.fc1.bias', 'vision_model.encoder.layers.8.self_attn.k_proj.bias', 'vision_model.encoder.layers.23.layer_norm2.weight', 'vision_model.encoder.layers.8.layer_norm2.bias', 'vision_model.encoder.layers.15.self_attn.out_proj.weight', 'vision_model.encoder.layers.9.layer_norm2.bias', 'vision_model.encoder.layers.16.self_attn.k_proj.bias', 'vision_model.encoder.layers.10.self_attn.q_proj.bias', 'vision_model.encoder.layers.21.layer_norm1.bias', 'vision_model.encoder.layers.3.self_attn.k_proj.bias', 'vision_model.encoder.layers.5.mlp.fc1.weight', 'vision_model.encoder.layers.0.mlp.fc1.weight', 'vision_model.encoder.layers.13.self_attn.v_proj.weight', 'vision_model.encoder.layers.17.layer_norm1.bias', 'vision_model.encoder.layers.4.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.weight', 'vision_model.encoder.layers.17.self_attn.out_proj.bias', 'vision_model.embeddings.position_ids', 'vision_model.encoder.layers.10.layer_norm1.bias', 'vision_model.encoder.layers.0.layer_norm2.weight', 'vision_model.encoder.layers.17.layer_norm2.weight', 'vision_model.encoder.layers.5.layer_norm2.weight', 'vision_model.encoder.layers.14.self_attn.k_proj.bias', 'vision_model.encoder.layers.11.self_attn.k_proj.weight', 'vision_model.encoder.layers.19.self_attn.k_proj.bias', 'vision_model.encoder.layers.3.mlp.fc1.bias', 
'vision_model.encoder.layers.13.mlp.fc1.bias', 'vision_model.encoder.layers.3.self_attn.v_proj.bias', 'vision_model.encoder.layers.11.mlp.fc2.weight', 'vision_model.encoder.layers.4.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.self_attn.q_proj.bias', 'vision_model.encoder.layers.1.mlp.fc2.weight', 'vision_model.encoder.layers.19.self_attn.q_proj.bias', 'vision_model.encoder.layers.8.layer_norm1.weight', 'vision_model.encoder.layers.8.mlp.fc2.weight', 'vision_model.encoder.layers.4.self_attn.out_proj.bias', 'vision_model.encoder.layers.22.self_attn.k_proj.bias', 'text_projection.weight', 'vision_model.encoder.layers.15.self_attn.k_proj.weight', 'vision_model.encoder.layers.11.layer_norm1.bias', 'vision_model.encoder.layers.8.self_attn.q_proj.bias', 'vision_model.encoder.layers.0.self_attn.q_proj.bias', 'vision_model.encoder.layers.20.self_attn.v_proj.bias', 'vision_model.encoder.layers.10.layer_norm2.weight', 'vision_model.encoder.layers.7.self_attn.out_proj.weight', 'vision_model.encoder.layers.3.self_attn.q_proj.weight', 'vision_model.encoder.layers.12.self_attn.q_proj.weight', 'vision_model.encoder.layers.17.mlp.fc1.weight', 'vision_model.encoder.layers.1.layer_norm1.weight', 'vision_model.encoder.layers.10.self_attn.k_proj.weight', 'vision_model.encoder.layers.3.layer_norm2.bias', 'vision_model.encoder.layers.13.mlp.fc2.weight', 'vision_model.encoder.layers.16.mlp.fc2.weight', 'vision_model.encoder.layers.11.self_attn.v_proj.weight', 'vision_model.encoder.layers.10.layer_norm1.weight', 'vision_model.encoder.layers.22.self_attn.out_proj.weight', 'vision_model.encoder.layers.7.mlp.fc2.weight', 'vision_model.encoder.layers.1.mlp.fc1.weight', 'vision_model.encoder.layers.7.self_attn.k_proj.bias', 'vision_model.encoder.layers.20.self_attn.v_proj.weight', 'vision_model.encoder.layers.2.layer_norm2.weight', 'vision_model.encoder.layers.13.layer_norm2.weight', 'vision_model.encoder.layers.15.layer_norm1.weight', 'vision_model.encoder.layers.17.self_attn.k_proj.bias', 'vision_model.pre_layrnorm.bias', 'vision_model.encoder.layers.2.mlp.fc2.weight', 'vision_model.encoder.layers.15.layer_norm2.weight', 'vision_model.encoder.layers.19.mlp.fc1.bias', 'vision_model.encoder.layers.9.self_attn.k_proj.weight', 'vision_model.encoder.layers.2.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.self_attn.q_proj.weight', 'vision_model.encoder.layers.8.layer_norm2.weight', 'vision_model.encoder.layers.2.self_attn.q_proj.bias', 'vision_model.embeddings.class_embedding', 'vision_model.encoder.layers.14.layer_norm2.weight', 'vision_model.encoder.layers.21.self_attn.q_proj.bias', 'vision_model.encoder.layers.6.layer_norm1.bias', 'vision_model.encoder.layers.18.self_attn.out_proj.weight', 'vision_model.encoder.layers.18.mlp.fc1.weight', 'vision_model.encoder.layers.1.self_attn.q_proj.bias', 'vision_model.encoder.layers.16.self_attn.q_proj.bias', 'vision_model.encoder.layers.21.mlp.fc1.weight', 'vision_model.encoder.layers.18.layer_norm1.weight', 'vision_model.encoder.layers.23.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.self_attn.out_proj.weight', 'vision_model.encoder.layers.13.layer_norm1.weight', 'vision_model.encoder.layers.8.mlp.fc1.bias', 'vision_model.encoder.layers.21.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.layer_norm1.bias', 'vision_model.encoder.layers.18.self_attn.q_proj.weight', 'vision_model.encoder.layers.20.layer_norm2.bias', 'vision_model.encoder.layers.4.mlp.fc1.bias', 'vision_model.encoder.layers.12.layer_norm1.bias', 
'vision_model.encoder.layers.9.self_attn.out_proj.bias', 'vision_model.encoder.layers.1.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.self_attn.out_proj.bias', 'vision_model.encoder.layers.13.mlp.fc1.weight', 'vision_model.encoder.layers.5.self_attn.k_proj.bias', 'vision_model.encoder.layers.14.mlp.fc2.weight', 'vision_model.encoder.layers.14.self_attn.v_proj.bias', 'vision_model.encoder.layers.2.layer_norm2.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.bias', 'vision_model.encoder.layers.2.layer_norm1.bias', 'vision_model.encoder.layers.20.layer_norm1.bias', 'vision_model.encoder.layers.9.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.self_attn.q_proj.bias', 'vision_model.encoder.layers.7.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.self_attn.k_proj.bias', 'vision_model.encoder.layers.0.layer_norm1.bias', 'vision_model.encoder.layers.0.self_attn.q_proj.weight', 'vision_model.encoder.layers.22.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.layer_norm1.bias', 'vision_model.encoder.layers.3.mlp.fc2.weight', 'vision_model.encoder.layers.4.mlp.fc2.bias', 'vision_model.encoder.layers.2.mlp.fc1.weight', 'vision_model.encoder.layers.21.layer_norm2.weight', 'vision_model.encoder.layers.16.layer_norm2.weight', 'vision_model.encoder.layers.4.layer_norm1.bias', 'vision_model.encoder.layers.17.layer_norm2.bias', 'vision_model.encoder.layers.15.layer_norm1.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.bias', 'vision_model.encoder.layers.13.mlp.fc2.bias', 'vision_model.encoder.layers.4.layer_norm2.weight', 'vision_model.encoder.layers.18.self_attn.v_proj.bias', 'vision_model.encoder.layers.15.mlp.fc2.bias', 'vision_model.encoder.layers.16.mlp.fc1.bias', 'vision_model.encoder.layers.23.self_attn.k_proj.weight', 'vision_model.encoder.layers.14.layer_norm1.weight', 'vision_model.encoder.layers.22.self_attn.v_proj.bias', 'vision_model.encoder.layers.21.mlp.fc2.weight', 'vision_model.encoder.layers.3.mlp.fc2.bias', 'vision_model.encoder.layers.23.self_attn.out_proj.bias', 'vision_model.encoder.layers.11.mlp.fc1.bias', 'vision_model.encoder.layers.23.layer_norm2.bias', 'vision_model.encoder.layers.23.mlp.fc1.weight', 'vision_model.encoder.layers.3.layer_norm2.weight', 'vision_model.encoder.layers.7.self_attn.v_proj.bias', 'vision_model.encoder.layers.15.mlp.fc1.bias', 'vision_model.encoder.layers.16.self_attn.v_proj.weight', 'vision_model.encoder.layers.4.layer_norm2.bias', 'vision_model.encoder.layers.13.self_attn.out_proj.bias', 'vision_model.encoder.layers.0.self_attn.v_proj.weight', 'vision_model.encoder.layers.21.self_attn.v_proj.weight', 'vision_model.encoder.layers.23.layer_norm1.weight', 'vision_model.encoder.layers.10.self_attn.v_proj.weight', 'logit_scale', 'vision_model.encoder.layers.9.self_attn.out_proj.weight', 'vision_model.encoder.layers.12.mlp.fc2.bias', 'vision_model.encoder.layers.4.mlp.fc2.weight', 'vision_model.encoder.layers.0.layer_norm2.bias', 'vision_model.encoder.layers.4.mlp.fc1.weight', 'vision_model.encoder.layers.20.self_attn.q_proj.weight', 'vision_model.encoder.layers.22.layer_norm1.bias', 'vision_model.encoder.layers.10.self_attn.v_proj.bias', 'vision_model.encoder.layers.15.layer_norm2.bias', 'vision_model.encoder.layers.0.mlp.fc1.bias', 'vision_model.encoder.layers.6.self_attn.v_proj.weight', 'vision_model.encoder.layers.5.self_attn.out_proj.bias', 'vision_model.encoder.layers.12.self_attn.out_proj.bias', 'vision_model.encoder.layers.7.mlp.fc1.bias', 
'vision_model.encoder.layers.11.self_attn.k_proj.bias', 'vision_model.encoder.layers.22.mlp.fc2.bias', 'vision_model.encoder.layers.0.mlp.fc2.weight', 'vision_model.encoder.layers.1.self_attn.k_proj.bias', 'vision_model.encoder.layers.6.layer_norm1.weight', 'vision_model.encoder.layers.13.layer_norm2.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.weight', 'vision_model.encoder.layers.19.layer_norm1.weight', 'vision_model.encoder.layers.19.layer_norm1.bias', 'vision_model.encoder.layers.15.self_attn.v_proj.weight', 'vision_model.encoder.layers.1.mlp.fc1.bias', 'vision_model.encoder.layers.11.mlp.fc2.bias', 'vision_model.encoder.layers.6.self_attn.v_proj.bias', 'vision_model.encoder.layers.7.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.layer_norm2.weight', 'vision_model.encoder.layers.19.self_attn.out_proj.bias', 'vision_model.encoder.layers.11.self_attn.out_proj.weight', 'vision_model.encoder.layers.18.self_attn.k_proj.bias', 'vision_model.encoder.layers.13.self_attn.k_proj.bias', 'vision_model.encoder.layers.6.self_attn.q_proj.weight', 'vision_model.encoder.layers.6.layer_norm2.weight', 'vision_model.encoder.layers.7.mlp.fc1.weight', 'vision_model.encoder.layers.9.self_attn.v_proj.bias', 'vision_model.encoder.layers.2.self_attn.k_proj.bias', 'vision_model.encoder.layers.19.self_attn.k_proj.weight', 'vision_model.encoder.layers.16.self_attn.q_proj.weight', 'vision_model.encoder.layers.2.mlp.fc2.bias', 'vision_model.encoder.layers.12.mlp.fc1.weight', 'vision_model.encoder.layers.2.self_attn.k_proj.weight', 'vision_model.encoder.layers.5.self_attn.q_proj.bias', 'vision_model.encoder.layers.8.self_attn.v_proj.bias', 'vision_model.encoder.layers.20.mlp.fc1.bias', 'vision_model.encoder.layers.17.self_attn.v_proj.weight', 'vision_model.encoder.layers.11.self_attn.out_proj.bias', 'vision_model.encoder.layers.12.self_attn.v_proj.bias', 'visual_projection.weight', 'vision_model.encoder.layers.5.self_attn.out_proj.weight', 'vision_model.encoder.layers.10.mlp.fc1.weight', 'vision_model.encoder.layers.17.self_attn.v_proj.bias', 'vision_model.post_layernorm.bias', 'vision_model.encoder.layers.19.layer_norm2.weight', 'vision_model.encoder.layers.2.self_attn.out_proj.weight', 'vision_model.encoder.layers.16.self_attn.out_proj.weight', 'vision_model.encoder.layers.6.mlp.fc1.weight', 'vision_model.encoder.layers.0.layer_norm1.weight', 'vision_model.encoder.layers.0.self_attn.k_proj.weight', 'vision_model.encoder.layers.6.mlp.fc2.weight', 'vision_model.encoder.layers.19.mlp.fc2.weight', 'vision_model.encoder.layers.1.layer_norm2.bias', 'vision_model.encoder.layers.14.self_attn.q_proj.weight', 'vision_model.encoder.layers.7.self_attn.v_proj.weight', 'vision_model.encoder.layers.17.layer_norm1.weight', 'vision_model.encoder.layers.15.self_attn.q_proj.bias', 'vision_model.encoder.layers.5.layer_norm1.weight', 'vision_model.encoder.layers.18.mlp.fc1.bias', 'vision_model.encoder.layers.9.mlp.fc2.weight', 'vision_model.encoder.layers.19.layer_norm2.bias', 'vision_model.encoder.layers.4.self_attn.q_proj.bias', 'vision_model.encoder.layers.21.layer_norm1.weight', 'vision_model.encoder.layers.12.self_attn.k_proj.weight', 'vision_model.encoder.layers.0.self_attn.v_proj.bias', 'vision_model.encoder.layers.10.mlp.fc2.weight', 'vision_model.encoder.layers.18.self_attn.v_proj.weight', 'vision_model.encoder.layers.2.self_attn.v_proj.bias', 'vision_model.encoder.layers.6.self_attn.q_proj.bias', 'vision_model.encoder.layers.21.self_attn.v_proj.bias', 'vision_model.encoder.layers.5.mlp.fc2.weight', 
'vision_model.encoder.layers.7.mlp.fc2.bias', 'vision_model.encoder.layers.22.mlp.fc1.bias', 'vision_model.encoder.layers.20.mlp.fc2.weight', 'vision_model.encoder.layers.20.layer_norm1.weight', 'vision_model.encoder.layers.23.mlp.fc1.bias', 'vision_model.encoder.layers.14.self_attn.k_proj.weight', 'vision_model.encoder.layers.5.layer_norm2.bias', 'vision_model.encoder.layers.15.self_attn.k_proj.bias', 'vision_model.encoder.layers.12.layer_norm2.bias', 'vision_model.encoder.layers.12.self_attn.v_proj.weight', 'vision_model.encoder.layers.14.mlp.fc2.bias', 'vision_model.encoder.layers.4.self_attn.v_proj.weight', 'vision_model.encoder.layers.13.self_attn.k_proj.weight', 'vision_model.encoder.layers.13.self_attn.q_proj.weight', 'vision_model.encoder.layers.11.mlp.fc1.weight', 'vision_model.encoder.layers.8.mlp.fc2.bias', 'vision_model.encoder.layers.13.self_attn.q_proj.bias', 'vision_model.encoder.layers.18.self_attn.q_proj.bias', 'vision_model.encoder.layers.11.layer_norm2.bias', 'vision_model.encoder.layers.20.self_attn.out_proj.weight', 'vision_model.encoder.layers.20.mlp.fc1.weight', 'vision_model.encoder.layers.19.mlp.fc2.bias', 'vision_model.encoder.layers.11.self_attn.q_proj.weight', 'vision_model.encoder.layers.3.self_attn.out_proj.weight', 'vision_model.encoder.layers.6.self_attn.out_proj.weight', 'vision_model.encoder.layers.22.layer_norm1.weight', 'vision_model.encoder.layers.3.layer_norm1.weight', 'vision_model.encoder.layers.1.layer_norm2.weight', 'vision_model.encoder.layers.19.mlp.fc1.weight', 'vision_model.encoder.layers.13.self_attn.out_proj.weight', 'vision_model.encoder.layers.9.mlp.fc1.weight', 'vision_model.encoder.layers.19.self_attn.v_proj.weight', 'vision_model.encoder.layers.21.mlp.fc1.bias', 'vision_model.encoder.layers.8.self_attn.k_proj.weight', 'vision_model.encoder.layers.8.self_attn.q_proj.weight', 'vision_model.encoder.layers.0.self_attn.k_proj.bias', 'vision_model.encoder.layers.6.layer_norm2.bias', 'vision_model.encoder.layers.4.layer_norm1.weight', 'vision_model.encoder.layers.17.mlp.fc1.bias', 'vision_model.encoder.layers.6.self_attn.out_proj.bias', 'vision_model.encoder.layers.7.self_attn.k_proj.weight', 'vision_model.encoder.layers.12.self_attn.q_proj.bias', 'vision_model.encoder.layers.8.mlp.fc1.weight', 'vision_model.encoder.layers.4.self_attn.k_proj.bias', 'vision_model.encoder.layers.12.mlp.fc1.bias', 'vision_model.encoder.layers.10.self_attn.out_proj.weight', 'vision_model.encoder.layers.14.layer_norm1.bias', 'vision_model.encoder.layers.2.self_attn.q_proj.weight', 'vision_model.encoder.layers.5.self_attn.v_proj.weight', 'vision_model.encoder.layers.19.self_attn.q_proj.weight', 'vision_model.encoder.layers.17.mlp.fc2.weight', 'vision_model.encoder.layers.14.self_attn.out_proj.bias', 'vision_model.encoder.layers.20.self_attn.out_proj.bias', 'vision_model.encoder.layers.12.self_attn.k_proj.bias', 'vision_model.encoder.layers.16.self_attn.out_proj.bias', 'vision_model.encoder.layers.0.self_attn.out_proj.weight', 'vision_model.encoder.layers.21.self_attn.out_proj.weight', 'vision_model.encoder.layers.23.self_attn.out_proj.weight', 'vision_model.encoder.layers.1.self_attn.out_proj.bias', 'vision_model.encoder.layers.22.self_attn.q_proj.bias', 'vision_model.encoder.layers.11.layer_norm1.weight', 'vision_model.encoder.layers.5.self_attn.v_proj.bias', 'vision_model.encoder.layers.8.self_attn.v_proj.weight', 'vision_model.encoder.layers.9.mlp.fc2.bias', 'vision_model.encoder.layers.3.mlp.fc1.weight', 'vision_model.encoder.layers.6.self_attn.k_proj.bias', 
'vision_model.encoder.layers.9.self_attn.q_proj.bias', 'vision_model.encoder.layers.12.mlp.fc2.weight', 'vision_model.encoder.layers.6.mlp.fc1.bias', 'vision_model.encoder.layers.15.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.self_attn.k_proj.weight', 'vision_model.encoder.layers.5.mlp.fc1.bias', 'vision_model.encoder.layers.1.self_attn.k_proj.weight', 'vision_model.encoder.layers.18.self_attn.out_proj.bias', 'vision_model.encoder.layers.18.layer_norm2.weight', 'vision_model.encoder.layers.14.mlp.fc1.bias', 'vision_model.encoder.layers.5.layer_norm1.bias', 'vision_model.encoder.layers.2.layer_norm1.weight', 'vision_model.encoder.layers.22.layer_norm2.bias', 'vision_model.encoder.layers.15.mlp.fc2.weight', 'vision_model.encoder.layers.16.mlp.fc1.weight', 'vision_model.encoder.layers.16.mlp.fc2.bias', 'vision_model.encoder.layers.9.layer_norm1.weight', 'vision_model.encoder.layers.20.mlp.fc2.bias', 'vision_model.encoder.layers.18.mlp.fc2.bias', 'vision_model.encoder.layers.1.layer_norm1.bias', 'vision_model.encoder.layers.17.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.mlp.fc1.bias', 'vision_model.encoder.layers.23.mlp.fc2.weight', 'vision_model.encoder.layers.0.self_attn.out_proj.bias', 'vision_model.encoder.layers.5.self_attn.k_proj.weight', 'vision_model.encoder.layers.15.mlp.fc1.weight', 'vision_model.encoder.layers.8.self_attn.out_proj.weight', 'vision_model.encoder.layers.10.mlp.fc1.bias', 'vision_model.encoder.layers.15.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.weight', 'vision_model.encoder.layers.18.mlp.fc2.weight', 'vision_model.encoder.layers.18.layer_norm1.bias', 'vision_model.post_layernorm.weight', 'vision_model.encoder.layers.4.self_attn.q_proj.weight', 'vision_model.encoder.layers.7.layer_norm1.weight', 'vision_model.encoder.layers.23.mlp.fc2.bias', 'vision_model.encoder.layers.3.self_attn.k_proj.weight', 'vision_model.encoder.layers.17.self_attn.q_proj.weight', 'vision_model.encoder.layers.19.self_attn.v_proj.bias', 'vision_model.encoder.layers.21.self_attn.k_proj.bias', 'vision_model.encoder.layers.9.layer_norm1.bias', 'vision_model.encoder.layers.5.mlp.fc2.bias', 'vision_model.encoder.layers.20.self_attn.q_proj.bias', 'vision_model.encoder.layers.6.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.mlp.fc2.bias', 'vision_model.encoder.layers.10.layer_norm2.bias', 'vision_model.encoder.layers.1.self_attn.q_proj.weight', 'vision_model.encoder.layers.18.layer_norm2.bias', 'vision_model.encoder.layers.16.layer_norm2.bias', 'vision_model.encoder.layers.18.self_attn.k_proj.weight', 'vision_model.encoder.layers.13.self_attn.v_proj.bias', 'vision_model.encoder.layers.1.self_attn.out_proj.weight', 'vision_model.encoder.layers.12.layer_norm2.weight', 'vision_model.encoder.layers.16.layer_norm1.weight', 'vision_model.encoder.layers.22.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.self_attn.q_proj.weight', 'vision_model.encoder.layers.4.self_attn.out_proj.weight', 'vision_model.encoder.layers.17.mlp.fc2.bias', 'vision_model.encoder.layers.14.self_attn.out_proj.weight', 'vision_model.embeddings.patch_embedding.weight', 'vision_model.encoder.layers.6.mlp.fc2.bias', 'vision_model.encoder.layers.5.self_attn.q_proj.weight', 'vision_model.encoder.layers.22.layer_norm2.weight', 'vision_model.encoder.layers.21.layer_norm2.bias', 'vision_model.encoder.layers.10.self_attn.q_proj.weight']

TheLastBen commented 1 year ago

It's not an error, it's just an info message. If it doesn't print "Conversion error" at the end, then the model was converted correctly.