Used commit : 1904a01117824d7d294227059f6b684972c5a8b6
When I updated to the latest version and ran the script, I got the error below.
(animatediff) C:\Users\toyxy\AnimateDiff>python -m scripts.animate --config configs/prompts/v2/5-RealisticVision.yaml
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Cannot initialize model with low cpu memory usage because accelerate was not found in the environment. Defaulting to low_cpu_mem_usage=False. It is strongly recommended to install accelerate for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
The config attributes {'scaling_factor': 0.18215} were passed to AutoencoderKL, but are not expected and will be ignored. Please verify your config.json configuration file.
Some weights of the model checkpoint at models/StableDiffusion/stable-diffusion-v1-5 were not used when initializing AutoencoderKL: ['encoder.mid_block.attentions.0.to_k.weight', 'encoder.mid_block.attentions.0.to_q.bias', 'decoder.mid_block.attentions.0.to_v.bias', 'encoder.mid_block.attentions.0.to_out.0.bias', 'encoder.mid_block.attentions.0.to_v.bias', 'decoder.mid_block.attentions.0.to_v.weight', 'decoder.mid_block.attentions.0.to_out.0.bias', 'encoder.mid_block.attentions.0.to_v.weight', 'encoder.mid_block.attentions.0.to_out.0.weight', 'decoder.mid_block.attentions.0.to_q.weight', 'decoder.mid_block.attentions.0.to_k.bias', 'decoder.mid_block.attentions.0.to_q.bias', 'decoder.mid_block.attentions.0.to_out.0.weight', 'encoder.mid_block.attentions.0.to_k.bias', 'decoder.mid_block.attentions.0.to_k.weight', 'encoder.mid_block.attentions.0.to_q.weight']
This IS expected if you are initializing AutoencoderKL from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing AutoencoderKL from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of AutoencoderKL were not initialized from the model checkpoint at models/StableDiffusion/stable-diffusion-v1-5 and are newly initialized: ['decoder.mid_block.attentions.0.query.weight', 'decoder.mid_block.attentions.0.query.bias', 'decoder.mid_block.attentions.0.proj_attn.bias', 'encoder.mid_block.attentions.0.query.weight', 'encoder.mid_block.attentions.0.proj_attn.bias', 'decoder.mid_block.attentions.0.proj_attn.weight', 'decoder.mid_block.attentions.0.value.bias', 'encoder.mid_block.attentions.0.value.weight', 'encoder.mid_block.attentions.0.key.bias', 'decoder.mid_block.attentions.0.key.weight', 'encoder.mid_block.attentions.0.query.bias', 'decoder.mid_block.attentions.0.value.weight', 'decoder.mid_block.attentions.0.key.bias', 'encoder.mid_block.attentions.0.proj_attn.weight', 'encoder.mid_block.attentions.0.key.weight', 'encoder.mid_block.attentions.0.value.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
loaded temporal unet's pretrained weights from models/StableDiffusion/stable-diffusion-v1-5\unet ...
The config attributes {'addition_embed_type': None, 'addition_embed_type_num_heads': 64, 'addition_time_embed_dim': None, 'class_embeddings_concat': False, 'conv_in_kernel': 3, 'conv_out_kernel': 3, 'cross_attention_norm': None, 'encoder_hid_dim': None, 'encoder_hid_dim_type': None, 'mid_block_only_cross_attention': None, 'num_attention_heads': None, 'projection_class_embeddings_input_dim': None, 'resnet_out_scale_factor': 1.0, 'resnet_skip_time_act': False, 'time_cond_proj_dim': None, 'time_embedding_act_fn': None, 'time_embedding_dim': None, 'time_embedding_type': 'positional', 'timestep_post_act': None, 'transformer_layers_per_block': 1} were passed to UNet3DConditionModel, but are not expected and will be ignored. Please verify your config.json configuration file.
Traceback (most recent call last):
File "C:\Users\toyxy\anaconda3\envs\animatediff\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\toyxy\anaconda3\envs\animatediff\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\toyxy\AnimateDiff\scripts\animate.py", line 159, in <module>
main(args)
File "C:\Users\toyxy\AnimateDiff\scripts\animate.py", line 53, in main
unet = UNet3DConditionModel.from_pretrained_2d(args.pretrained_model_path, subfolder="unet", unet_additional_kwargs=OmegaConf.to_container(inference_config.unet_additional_kwargs))
File "C:\Users\toyxy\AnimateDiff\animatediff\models\unet.py", line 484, in from_pretrained_2d
model = cls.from_config(config, unet_additional_kwargs)
File "C:\Users\toyxy\anaconda3\envs\animatediff\lib\site-packages\diffusers\configuration_utils.py", line 210, in from_config
model = cls(**init_dict)
File "C:\Users\toyxy\anaconda3\envs\animatediff\lib\site-packages\diffusers\configuration_utils.py", line 567, in inner_init
init(self, *args, **init_kwargs)
File "C:\Users\toyxy\AnimateDiff\animatediff\models\unet.py", line 188, in __init__
raise ValueError(f"unknown mid_block_type : {mid_block_type}")
ValueError: unknown mid_block_type : UNetMidBlock2DCrossAttn
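For context, the failing guard in animatediff/models/unet.py accepts only the 3D mid block, while the Stable Diffusion v1-5 unet config supplies the 2D one. A minimal sketch of that check (the function name here is mine, not the repo's; only the error message is taken from the traceback):

```python
def check_mid_block_type(mid_block_type: str) -> None:
    # Mirrors the guard that produces the traceback above:
    # UNet3DConditionModel only supports the 3D cross-attention mid block.
    if mid_block_type != "UNetMidBlock3DCrossAttn":
        raise ValueError(f"unknown mid_block_type : {mid_block_type}")

# The 2D value from the stable-diffusion-v1-5 unet config triggers the error:
try:
    check_mid_block_type("UNetMidBlock2DCrossAttn")
except ValueError as e:
    print(e)  # unknown mid_block_type : UNetMidBlock2DCrossAttn
```

This suggests the 2D config keys are being forwarded unchanged into `cls.from_config(...)`, which would point to a diffusers version mismatch rather than a broken checkpoint.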