Kosinkadink / ComfyUI-AnimateDiff-Evolved

Improved AnimateDiff for ComfyUI and Advanced Sampling Support

[Caused by pytorch version] KSampler error when using animatediff #100

Open · Amber-Believe opened this issue 1 year ago

Amber-Believe commented 1 year ago

There's something wrong with KSampler. The model and the other nodes load properly, but the KSampler step fails (it worked normally before AnimateDiff was added). The error log and workflow are attached below: workflow.json, comfyui.log

Amber-Believe commented 1 year ago

The current python version is 3.10.13

Kosinkadink commented 1 year ago

Hey, the workflow you linked executes on my end (however, to get a good output, you'll need to increase batch_size to around 16, the sweet spot for AD, and a fine-tuned SD checkpoint can get even better results).

Can you try to restart ComfyUI and update both ComfyUI and AnimateDiff-Evolved just in case? The error you get suggests that it could not inject the motion module properly into the SD unet, but the checkpoint you are using is indeed a valid SD1.5 one. Just in case, also try with a different SD1.5 checkpoint.

Amber-Believe commented 1 year ago

After the update, it still fails.

Kosinkadink commented 1 year ago

What happens if you attempt to use another SD1.5 checkpoint? And have you tried reinstalling the AnimateDiff-Evolved repo?

Amber-Believe commented 1 year ago

I changed the model, and it still doesn't work.

Amber-Believe commented 1 year ago

And have you tried reinstalling AnimateDiff-Evolved repo?

YES

Kosinkadink commented 1 year ago

Hmm, very odd, I'll look into some of the details of the error you're getting later today. In the meantime, can you list what OS you are running, how you installed ComfyUI, etc.?

jackylu97 commented 1 year ago

I'm seeing the same issue. I'm using Ubuntu and installed through the ComfyUI Manager, and I'm on Python 3.9. For whatever reason, reverting the code to previous versions also doesn't seem to fix the issue.

jackylu97 commented 1 year ago

I found a fix: it turns out the instructions on the main ComfyUI GitHub were wrong for my setup. Running this worked for me:

pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu117

(I'm on cuda117)
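After reinstalling, a quick way to double-check which torch build actually got picked up (a minimal diagnostic sketch; run it with the same Python environment ComfyUI uses):

```python
# Diagnostic sketch: confirm which torch build is installed and that it can see the GPU.
import torch

print("torch version:", torch.__version__)        # e.g. 2.0.1+cu117
print("built against CUDA:", torch.version.cuda)  # e.g. 11.7
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```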

Saranmt commented 1 year ago

I'm getting the same issue. It runs fine up to 95%, and then when it finishes I get the error instead of the images/video.

Kosinkadink commented 1 year ago

@Saranmt what is the error you get? Please provide the console output

Saranmt commented 1 year ago

This is the error:

Error occurred when executing VAELoader:

invalid load key, '<'.

File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list results.append(getattr(obj, func)(slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 587, in load_vae sd = comfy.utils.load_torch_file(vae_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 22, in load_torch_file pl_sd = torch.load(ckpt, map_location=device, pickle_module=comfy.checkpoint_pickle) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1028, in load return _legacy_load(opened_file, map_location, pickle_module, pickle_load_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1246, in _legacy_load magic_number = pickle_module.load(f, **pickle_load_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Queue size: 0 Extra options

Saranmt commented 1 year ago

This is the full log:

D:\Ai SD\Comfy UI\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
** ComfyUI start up time: 2023-10-24 09:52:43.382999

Prestartup times for custom nodes:
   0.0 seconds: D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 8192 MB, total RAM 32611 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2070 : cudaMallocAsync
VAE dtype: torch.float32
Using pytorch cross attention
Adding extra search path checkpoints path/to/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path configs path/to/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path vae path/to/stable-diffusion-webui/models/VAE
Adding extra search path loras path/to/stable-diffusion-webui/models/Lora
Adding extra search path loras path/to/stable-diffusion-webui/models/LyCORIS
Adding extra search path upscale_models path/to/stable-diffusion-webui/models/ESRGAN
Adding extra search path upscale_models path/to/stable-diffusion-webui/models/RealESRGAN
Adding extra search path upscale_models path/to/stable-diffusion-webui/models/SwinIR
Adding extra search path embeddings path/to/stable-diffusion-webui/embeddings
Adding extra search path hypernetworks path/to/stable-diffusion-webui/models/hypernetworks
Adding extra search path controlnet path/to/stable-diffusion-webui/models/ControlNet
Using pytorch cross attention

Loading: ComfyUI-Impact-Pack (V4.26.1)

Loading: ComfyUI-Impact-Pack (Subpack: V0.2.4)

Loading: ComfyUI-Manager (V0.36)

ComfyUI Revision: 1612 [2ec6158e] | Released on '2023-10-22'

Registered sys.path: ['D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\init.py', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_pycocotools', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_oneformer', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_midas_repo', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\comfy', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\git\ext\gitdb', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\python_embeded\python311.zip', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\python_embeded', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\win32', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\win32\lib', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\Pythonwin', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules', 'D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\impact_subpack', '../..'] D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly") FizzleDorf Custom Nodes: Loaded [tinyterraNodes] Loaded D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\failfast-comfyui-extensions\extensions D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\web\extensions\failfast-comfyui-extensions

Import times for custom nodes:
   0.0 seconds: D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
   0.1 seconds: D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\failfast-comfyui-extensions
   0.1 seconds: D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
   0.1 seconds: D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes
   0.1 seconds: D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
   0.2 seconds: D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
   0.7 seconds: D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
   2.9 seconds: D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes
   6.7 seconds: D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
got prompt
FETCH DATA from: D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
model_type EPS
adm 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
missing {'cond_stage_model.text_projection', 'cond_stage_model.logit_scale'}
left over keys: dict_keys(['cond_stage_model.transformer.text_model.embeddings.position_ids'])
[AnimateDiffEvo] - INFO - Loading motion module mm-Stabilized_high.pth
[AnimateDiffEvo] - INFO - Using fp16, converting motion module to fp16
Requested to load SD1ClipModel
Loading 1 new model
[] []
[AnimateDiffEvo] - INFO - Sliding context window activated - latents passed in (132) greater than context_length 16.
[AnimateDiffEvo] - INFO - Injecting motion module mm-Stabilized_high.pth version v1.
Requested to load BaseModel
Requested to load ControlNet
Loading 2 new models
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [11:42<00:00, 35.10s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm-Stabilized_high.pth version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
[AnimateDiffEvo] - INFO - Removing motion module mm-Stabilized_high.pth from cache
D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:49: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 587, in load_vae
    sd = comfy.utils.load_torch_file(vae_path)
  File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 22, in load_torch_file
    pl_sd = torch.load(ckpt, map_location=device, pickle_module=comfy.checkpoint_pickle)
  File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1028, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "D:\Ai SD\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1246, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '<'.

Prompt executed in 936.37 seconds

Saranmt commented 1 year ago

@Kosinkadink

Just FYI, I solved it on my side. The issue was the VAE: instead of loading a separate VAE, I routed all VAE connections to the main checkpoint (DreamShaper, which has a VAE baked in), and there are no more errors!
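For reference, invalid load key, '<' from torch.load usually means the file on disk isn't a real checkpoint at all, often an HTML error page saved by a broken download, since HTML starts with <. A minimal way to check a suspect VAE file (the path below is hypothetical; point it at whatever VAELoader was loading):

```python
# Sanity-check a suspect VAE/checkpoint file. "invalid load key, '<'" means the
# file starts with '<', i.e. most likely an HTML page saved under a .pt/.ckpt name.
# The path below is hypothetical - adjust it to the file VAELoader was given.
from pathlib import Path

vae_path = Path(r"D:\Ai SD\Comfy UI\ComfyUI_windows_portable\ComfyUI\models\vae\your_vae.pt")

with open(vae_path, "rb") as f:
    head = f.read(16)

print("first bytes:", head)
if head.lstrip().startswith(b"<"):
    print("Looks like HTML/text, not a checkpoint - re-download the file.")
elif head.startswith(b"PK"):
    print("Zip-based torch checkpoint - the header at least looks valid.")
else:
    print("Legacy pickle checkpoint or something else - inspect further.")
```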

Persite007 commented 1 year ago

I just started a trial tonight and got the same error... SOLVED!

1. Install AnimateDiff from ComfyUI Manager and restart ComfyUI.
2. Update ComfyUI and restart ComfyUI.

It now works perfectly on Windows 10 with every SD1.5 checkpoint I tried. So far I've only run one sample: a txt2img 32-frame animation with context_length 16 (uniform) and the PanLeft and ZoomOut Motion LoRAs.

https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/49327010/9a21dfb0-d76a-49d7-ba08-a5be15872c93

First real test. Apart from the mistakes with the red on the lips and the glossy neck, it is really good.

https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/49327010/4ee9e727-4ca5-4e81-8bbe-d7cb6fe9ba9e

Thx @Kosinkadink for this great job!

KeithHanson commented 1 year ago

Also receiving this error. I'm on Ubuntu 20.04, using an Nvidia Tesla K80, and have installed my libs via conda. Relevant info:

(Note: as far as I can tell, I can't go any higher because the K80 isn't included in newer CUDA versions.)

NVIDIA-SMI 470.199.02   Driver Version: 470.199.02   CUDA Version: 11.4

conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch

Error:

(py310) keith@ai-vm [08:08:55 PM] [~/ComfyUI] [master *]
-> % python main.py --listen 0.0.0.0
** ComfyUI start up time: 2023-10-25 20:09:00.325317

Prestartup times for custom nodes:
   0.0 seconds: /home/keith/ComfyUI/custom_nodes/ComfyUI-Manager

Total VRAM 11441 MB, total RAM 15989 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 Tesla K80 : 
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
[tinyterraNodes] Loaded
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Total VRAM 11441 MB, total RAM 15989 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 Tesla K80 : 
VAE dtype: torch.float32
Torch version: 1.11.0
FizzleDorf Custom Nodes: Loaded
### Loading: ComfyUI-Manager (V0.36)
### ComfyUI Revision: 1619 [7fbb217d] | Released on '2023-10-25'
Registered sys.path: ['/home/keith/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/controlnet_aux', '/home/keith/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_detectron2', '/home/keith/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_oneformer', '/home/keith/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_midas_repo', '/home/keith/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_mmpkg', '/home/keith/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_pycocotools', '/home/keith/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/__init__.py', '/home/keith/ComfyUI/custom_nodes/comfyui_controlnet_aux/src', '/home/keith/ComfyUI/comfy', '/home/keith/miniconda3/envs/py310/lib/python3.10/site-packages/git/ext/gitdb', '/home/keith/ComfyUI', '/home/keith/miniconda3/envs/py310/lib/python310.zip', '/home/keith/miniconda3/envs/py310/lib/python3.10', '/home/keith/miniconda3/envs/py310/lib/python3.10/lib-dynload', '/home/keith/miniconda3/envs/py310/lib/python3.10/site-packages', '/home/keith/ComfyUI/custom_nodes/ComfyUI_NestedNodeBuilder', '../..']

Import times for custom nodes:
   0.0 seconds: /home/keith/ComfyUI/custom_nodes/ComfyUI_NestedNodeBuilder
   0.0 seconds: /home/keith/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet
   0.0 seconds: /home/keith/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite
   0.0 seconds: /home/keith/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale
   0.0 seconds: /home/keith/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved
   0.1 seconds: /home/keith/ComfyUI/custom_nodes/ComfyUI-Manager
   0.1 seconds: /home/keith/ComfyUI/custom_nodes/ComfyUI_FizzNodes
   0.2 seconds: /home/keith/ComfyUI/custom_nodes/comfyui_controlnet_aux
   0.4 seconds: /home/keith/ComfyUI/custom_nodes/ComfyUI_tinyterraNodes
   0.5 seconds: /home/keith/ComfyUI/custom_nodes/comfyui-reactor-node

Starting server

To see the GUI go to: http://0.0.0.0:8188
FETCH DATA from: /home/keith/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
got prompt
model_type EPS
adm 0
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
missing {'cond_stage_model.logit_scale', 'cond_stage_model.text_projection'}
left over keys: dict_keys(['cond_stage_model.transformer.text_model.embeddings.position_ids', 'embedding_manager.embedder.transformer.text_model.embeddings.position_embedding.weight', 'embedding_manager.embedder.transformer.text_model.embeddings.position_ids', 'embedding_manager.embedder.transformer.text_model.embeddings.token_embedding.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.layer_norm2.bias', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.10.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.k_proj.weight', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.layer_norm1.bias', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.5.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.mlp.fc2.weight', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.v_proj.bias', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.final_layer_norm.bias', 'embedding_manager.embedder.transformer.text_model.final_layer_norm.weight', 'lora_te_text_model_encoder_layers_0_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_0_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_0_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_0_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_0_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_0_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_0_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_0_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_0_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_0_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_0_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_0_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_0_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_0_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_0_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_0_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_10_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_10_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_10_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_10_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_10_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_10_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_10_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_10_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_10_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_10_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_10_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_10_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_10_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_10_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_10_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_10_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_10_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_10_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_11_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_11_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_11_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_11_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_11_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_11_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_11_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_11_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_11_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_11_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_11_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_11_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_11_self_attn_q_proj.alpha', 
'lora_te_text_model_encoder_layers_11_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_11_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_11_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_11_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_11_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_1_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_1_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_1_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_1_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_1_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_1_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_1_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_1_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_1_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_1_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_1_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_1_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_1_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_1_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_1_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_1_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_1_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_1_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_2_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_2_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_2_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_2_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_2_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_2_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_2_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_2_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_2_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_2_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_2_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_2_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_2_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_2_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_2_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_2_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_2_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_2_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_3_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_3_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_3_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_3_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_3_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_3_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_3_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_3_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_3_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_3_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_3_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_3_self_attn_out_proj.lora_up.weight', 
'lora_te_text_model_encoder_layers_3_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_3_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_3_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_3_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_3_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_3_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_4_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_4_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_4_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_4_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_4_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_4_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_4_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_4_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_4_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_4_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_4_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_4_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_4_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_4_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_4_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_4_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_4_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_4_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_5_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_5_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_5_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_5_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_5_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_5_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_5_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_5_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_5_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_5_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_5_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_5_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_5_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_5_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_5_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_5_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_5_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_5_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_6_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_6_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_6_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_6_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_6_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_6_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_6_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_6_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_6_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_6_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_6_self_attn_out_proj.lora_down.weight', 
'lora_te_text_model_encoder_layers_6_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_6_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_6_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_6_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_6_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_6_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_6_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_7_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_7_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_7_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_7_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_7_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_7_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_7_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_7_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_7_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_7_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_7_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_7_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_7_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_7_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_7_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_7_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_7_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_7_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_8_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_8_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_8_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_8_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_8_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_8_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_8_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_8_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_8_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_8_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_8_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_8_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_8_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_8_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_8_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_8_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_8_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_8_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_9_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_9_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_9_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_9_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_9_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_9_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_9_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_9_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_9_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_9_self_attn_out_proj.alpha', 
'lora_te_text_model_encoder_layers_9_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_9_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_9_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_9_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_9_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_9_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_9_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_9_self_attn_v_proj.lora_up.weight', 'model_ema.decay', 'model_ema.num_updates'])
loaded straight to GPU
Requested to load BaseModel
Loading 1 new model
[AnimateDiffEvo] - INFO - Loading motion module mm_sd_v15_v2.ckpt

 Max Frames:  120 
 Current Prompt:  (Masterpiece, best quality:1.2) 12 years old boy in physical therapist clinic 
 Next Prompt:  (Masterpiece, best quality:1.2) 12 years old boy in physical therapist clinic 
 Strength :  1.0 

Requested to load SD1ClipModel
Loading 1 new model
Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.
[] []
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
Leftover VAE keys ['loss.discriminator.main.0.bias', 'loss.discriminator.main.0.weight', 'loss.discriminator.main.11.bias', 'loss.discriminator.main.11.weight', 'loss.discriminator.main.2.weight', 'loss.discriminator.main.3.bias', 'loss.discriminator.main.3.num_batches_tracked', 'loss.discriminator.main.3.running_mean', 'loss.discriminator.main.3.running_var', 'loss.discriminator.main.3.weight', 'loss.discriminator.main.5.weight', 'loss.discriminator.main.6.bias', 'loss.discriminator.main.6.num_batches_tracked', 'loss.discriminator.main.6.running_mean', 'loss.discriminator.main.6.running_var', 'loss.discriminator.main.6.weight', 'loss.discriminator.main.8.weight', 'loss.discriminator.main.9.bias', 'loss.discriminator.main.9.num_batches_tracked', 'loss.discriminator.main.9.running_mean', 'loss.discriminator.main.9.running_var', 'loss.discriminator.main.9.weight', 'loss.logvar', 'loss.perceptual_loss.lin0.model.1.weight', 'loss.perceptual_loss.lin1.model.1.weight', 'loss.perceptual_loss.lin2.model.1.weight', 'loss.perceptual_loss.lin3.model.1.weight', 'loss.perceptual_loss.lin4.model.1.weight', 'loss.perceptual_loss.net.slice1.0.bias', 'loss.perceptual_loss.net.slice1.0.weight', 'loss.perceptual_loss.net.slice1.2.bias', 'loss.perceptual_loss.net.slice1.2.weight', 'loss.perceptual_loss.net.slice2.5.bias', 'loss.perceptual_loss.net.slice2.5.weight', 'loss.perceptual_loss.net.slice2.7.bias', 'loss.perceptual_loss.net.slice2.7.weight', 'loss.perceptual_loss.net.slice3.10.bias', 'loss.perceptual_loss.net.slice3.10.weight', 'loss.perceptual_loss.net.slice3.12.bias', 'loss.perceptual_loss.net.slice3.12.weight', 'loss.perceptual_loss.net.slice3.14.bias', 'loss.perceptual_loss.net.slice3.14.weight', 'loss.perceptual_loss.net.slice4.17.bias', 'loss.perceptual_loss.net.slice4.17.weight', 'loss.perceptual_loss.net.slice4.19.bias', 'loss.perceptual_loss.net.slice4.19.weight', 'loss.perceptual_loss.net.slice4.21.bias', 'loss.perceptual_loss.net.slice4.21.weight', 'loss.perceptual_loss.net.slice5.24.bias', 'loss.perceptual_loss.net.slice5.24.weight', 'loss.perceptual_loss.net.slice5.26.bias', 'loss.perceptual_loss.net.slice5.26.weight', 'loss.perceptual_loss.net.slice5.28.bias', 'loss.perceptual_loss.net.slice5.28.weight', 'loss.perceptual_loss.scaling_layer.scale', 'loss.perceptual_loss.scaling_layer.shift']
[AnimateDiffEvo] - INFO - Sliding context window activated - latents passed in (30) greater than context_length 16.
[AnimateDiffEvo] - INFO - Injecting motion module mm_sd_v15_v2.ckpt version v2.
[AnimateDiffEvo] - INFO - Ejecting motion module mm_sd_v15_v2.ckpt version v2.
[AnimateDiffEvo] - INFO - Removing motion module mm_sd_v15_v2.ckpt from cache
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/home/keith/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/home/keith/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/home/keith/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/home/keith/ComfyUI/nodes.py", line 1237, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "/home/keith/ComfyUI/nodes.py", line 1207, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "/home/keith/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 146, in animatediff_sample
    inject_motion_module(model=model, motion_module=motion_module, params=params)
  File "/home/keith/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/motion_module.py", line 200, in inject_motion_module
    injectors[params.injector](model, motion_module)
  File "/home/keith/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/motion_module.py", line 235, in _inject_motion_module_to_unet
    unet.output_blocks[unet_idx].insert(
  File "/home/keith/miniconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1185, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'TimestepEmbedSequential' object has no attribute 'insert'

Prompt executed in 79.07 seconds

Pip List:

(py310) keith@ai-vm [08:15:31 PM] [~/ComfyUI] [master *]
-> % pip list
Package                Version
---------------------- ------------
accelerate             0.24.0
aiohttp                3.8.6
aiosignal              1.3.1
albumentations         1.3.1
async-timeout          4.0.3
attrs                  23.1.0
Brotli                 1.0.9
certifi                2023.7.22
cffi                   1.15.1
charset-normalizer     2.0.4
coloredlogs            15.0.1
contourpy              1.1.1
cryptography           41.0.3
cycler                 0.12.1
Cython                 3.0.4
easydict               1.11
einops                 0.7.0
filelock               3.12.4
flatbuffers            23.5.26
fonttools              4.43.1
frozenlist             1.4.0
fsspec                 2023.10.0
gitdb                  4.0.11
GitPython              3.1.40
huggingface-hub        0.17.3
humanfriendly          10.0
idna                   3.4
imageio                2.31.6
insightface            0.7.3
joblib                 1.3.2
kiwisolver             1.4.5
lazy_loader            0.3
llvmlite               0.41.1
matplotlib             3.8.0
mpmath                 1.3.0
multidict              6.0.4
networkx               3.2
numba                  0.58.1
numexpr                2.8.7
numpy                  1.26.1
onnx                   1.14.1
onnxruntime            1.16.1
onnxruntime-gpu        1.16.1
opencv-python-headless 4.8.1.78
packaging              23.2
pandas                 2.1.1
Pillow                 10.0.1
pip                    23.3.1
platformdirs           3.11.0
pooch                  1.8.0
prettytable            3.9.0
protobuf               3.20.3
psutil                 5.9.6
pycparser              2.21
PyMatting              1.1.10
pyOpenSSL              23.2.0
pyparsing              3.1.1
PySocks                1.7.1
python-dateutil        2.8.2
pytz                   2023.3.post1
PyYAML                 6.0.1
qudida                 0.0.4
regex                  2023.10.3
rembg                  2.0.51
requests               2.31.0
safetensors            0.4.0
scikit-image           0.22.0
scikit-learn           1.3.2
scipy                  1.11.3
setuptools             68.0.0
six                    1.16.0
smmap                  5.0.1
sympy                  1.12
threadpoolctl          3.2.0
tifffile               2023.9.26
tokenizers             0.14.1
torch                  1.11.0
torchaudio             0.11.0
torchsde               0.2.6
torchvision            0.12.0
tqdm                   4.66.1
trampoline             0.1.2
transformers           4.34.1
typing_extensions      4.7.1
tzdata                 2023.3
urllib3                1.26.18
wcwidth                0.2.8
wheel                  0.41.2
yarl                   1.9.2
(py310) keith@ai-vm [08:15:33 PM] [~/ComfyUI] [master *]
-> % 

Workflow:

debugging.json

Checkpoint: https://civitai.com/models/134442/helloyoung25d

VAE: https://huggingface.co/AIARTCHAN/aichan_blend/blob/main/vae/BerrysMix.vae.safetensors

ControlNet: lineart.pth and yaml https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main

Kosinkadink commented 1 year ago

@KeithHanson hey, I think you have the same issue as the folks whose problem was that pytorch needed to be a higher version. If updating pytorch/trying out diff versions is literally impossible on your machine, let me know, and I can look into whether there are alternate methods to do the unet injection that AD needs in order to work.

KeithHanson commented 1 year ago

@Kosinkadink

If updating pytorch/trying out diff versions is literally impossible on your machine

it is literally impossible to go higher than pytorch v1.12.1 due to the Nvidia K80 being locked to CUDA 11.4 (see here) - if you scroll up on that link, you'll see that all later versions of torch only support CUDA 11.7 and up, and Nvidia has decided not to support the K80 there.

There IS a kind soul who has produced binaries with support baked in up to 1.13, IIRC: https://github.com/nelson-liu/pytorch-manylinux-binaries - though I haven't tried them yet.

If you can point me roughly toward which pytorch features are being used (I'm new to AI art, but not a new developer), I can probably nail this down to a very specific version we can pin to and work towards.

I was tinkering in the code last night and managed to put a few try/except blocks in to emulate what you're doing with the unet insert/append and pop. But it's a little confusing for a noob in this world... maybe you can help me understand what your range of indices is doing or expecting for those unet layers?

It is obviously not the same on my machine :sweat_smile:

Specifically, the problems are in the two methods starting here: https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/blob/b41b7af6bb0435212519ef21c13b69e1e8dd8fdd/animatediff/motion_module.py#L224

I slapped some try/excepts in there to match what you're trying to do with the mod and division operations (:scream:), but obviously that just swallows the exception. Getting past that error trips up something else further down the line that expects the layers to be different, I suspect - but I was tired and fell asleep during my spelunking :sweat_smile:
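
Something like this rough shim is what I was fumbling toward - totally untested beyond the injection step, and the helper name is just something I made up - it emulates the missing insert() by rebuilding the Sequential's registered submodules:

import torch.nn as nn

def sequential_insert(seq: nn.Sequential, index: int, module: nn.Module) -> None:
    """Emulate nn.Sequential.insert(index, module) on torch builds that lack it."""
    modules = list(seq)              # current submodules, in registration order
    modules.insert(index, module)    # plain Python list insert
    seq._modules.clear()             # drop the old "0", "1", ... registrations
    for i, m in enumerate(modules):  # re-register everything under fresh keys
        seq.add_module(str(i), m)

In theory, swapping the failing unet.output_blocks[unet_idx].insert(...) call for a helper like this gets past the AttributeError itself, but as I said, when I hacked around it something further down the line still seemed to expect the layers to be different.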

I'll report back on whether a higher version errors or not, and give those other binaries a shot. I should be able to get to 1.13.1 that way, but from all my searching so far, that is THE highest version this budget K80 can support :grin:

Thanks for the help!

KeithHanson commented 1 year ago

@Kosinkadink

Hm. New error (in control_net_aux - so not related to you) once I upgraded to pytorch 1.12.1 and python 3.10. I've also confirmed that I can't go any higher than this, even with those binaries.

The only reason I'm putting so much effort into this is that these cards go for anywhere from $100-$300, provide TWO GPUs with 12GB of VRAM each, and need about 150W per GPU (300W at full tilt).

If I can get a stable environment set up, running things like video generation overnight while I'm unconscious is no big deal :sweat_smile:

I will likely end up getting a beefier modern card, or at least something that can handle CUDA 11.7 and an upgrade to pytorch 2.x, but for the moment I'd love to help make this work if at all possible, though I'd need some direction on how to support you :)

At the very least, it would be nice to know the pytorch version boundary we can't go below and still use this.
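
For whoever hits this next, here's a quick probe (just a rough check, not a definitive boundary) for whether a given torch build even has the method the injection relies on - going by the traceback above, it's insert() on an nn.Sequential subclass:

import torch
import torch.nn as nn

# The traceback dies because TimestepEmbedSequential (an nn.Sequential subclass)
# has no insert(); older torch builds simply never defined it on Sequential.
print(torch.__version__)
print(hasattr(nn.Sequential, "insert"))  # False on the 1.11/1.12 setups above, judging by the AttributeError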

KeithHanson commented 1 year ago

Ok, last bit of info before I call it quits on my spelunking. Hopefully this is helpful!

I've set up the simplest workflow I could, copied exactly from your repo: https://user-images.githubusercontent.com/7365912/271732607-b1374343-7b86-453f-b6f5-9717fd8b09aa.png

I have downloaded the EXACT models used in that workflow, and ensured every single setting is identical before posting this information.

NVIDIA-SMI 470.199.02 Driver Version: 470.199.02 CUDA Version: 11.4

(py310) keith@ai-vm [09:33:27 AM] [~/ComfyUI] [master *]
-> % pip list
Package                Version
---------------------- ------------------
absl-py                2.0.0
accelerate             0.24.0
addict                 2.4.0
aiohttp                3.8.6
aiosignal              1.3.1
albumentations         1.3.1
antlr4-python3-runtime 4.9.3
async-timeout          4.0.3
attrs                  23.1.0
beautifulsoup4         4.12.2
blessed                1.20.0
Brotli                 1.0.9
bs4                    0.0.1
cchardet               2.1.7
certifi                2023.7.22
cffi                   1.15.1
chardet                5.2.0
charset-normalizer     2.0.4
colorama               0.4.6
coloredlogs            15.0.1
contourpy              1.1.1
cryptography           41.0.3
cssselect2             0.7.0
cycler                 0.12.1
Cython                 3.0.4
easydict               1.11
einops                 0.7.0
emoji                  2.8.0
filelock               3.12.4
flatbuffers            23.5.26
fonttools              4.43.1
frozenlist             1.4.0
fsspec                 2023.10.0
ftfy                   6.1.1
fvcore                 0.1.5.post20221221
gitdb                  4.0.11
GitPython              3.1.40
huggingface-hub        0.17.3
humanfriendly          10.0
idna                   3.4
imageio                2.31.6
importlib-metadata     6.8.0
inquirer               3.1.3
insightface            0.7.3
iopath                 0.1.10
joblib                 1.3.2
kiwisolver             1.4.5
lazy_loader            0.3
llvmlite               0.41.1
lxml                   4.9.3
markdown-it-py         3.0.0
matplotlib             3.8.0
mdurl                  0.1.2
mediapipe              0.10.7
mpmath                 1.3.0
multidict              6.0.4
networkx               3.2
numba                  0.58.1
numexpr                2.8.7
numpy                  1.26.1
omegaconf              2.3.0
onnx                   1.14.1
onnxruntime            1.16.1
onnxruntime-gpu        1.16.1
opencv-contrib-python  4.8.1.78
opencv-python          4.8.1.78
opencv-python-headless 4.8.1.78
packaging              23.2
pandas                 2.1.1
Pillow                 10.0.1
pip                    23.3.1
platformdirs           3.11.0
pooch                  1.8.0
portalocker            2.8.2
prettytable            3.9.0
prompt-toolkit         3.0.39
protobuf               3.20.3
psutil                 5.9.6
pycparser              2.21
Pygments               2.16.1
PyMatting              1.1.10
pyOpenSSL              23.2.0
pyparsing              3.1.1
PySocks                1.7.1
python-dateutil        2.8.2
python-editor          1.0.4
pytz                   2023.3.post1
PyYAML                 6.0.1
qudida                 0.0.4
readchar               4.0.5
regex                  2023.10.3
rembg                  2.0.51
reportlab              4.0.6
requests               2.31.0
rich                   13.6.0
safetensors            0.4.0
scikit-image           0.22.0
scikit-learn           1.3.2
scipy                  1.11.3
setuptools             68.0.0
simpleeval             0.9.13
six                    1.16.0
smmap                  5.0.1
sounddevice            0.4.6
soupsieve              2.5
svglib                 1.5.1
sympy                  1.12
tabulate               0.9.0
termcolor              2.3.0
threadpoolctl          3.2.0
tifffile               2023.9.26
timm                   0.6.13
tinycss2               1.2.1
tokenizers             0.14.1
tomli                  2.0.1
torch                  1.12.1
torchaudio             0.12.1
torchsde               0.2.6
torchvision            0.13.1
tqdm                   4.66.1
trampoline             0.1.2
transformers           4.34.1
typing_extensions      4.7.1
tzdata                 2023.3
urllib3                1.26.18
wcwidth                0.2.6
webencodings           0.5.1
wheel                  0.41.2
yacs                   0.1.8
yapf                   0.40.2
yarl                   1.9.2
zipp                   3.17.0

Conda setup: conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch

Models downloaded:

(py310) keith@ai-vm [09:34:23 AM] [~/ComfyUI] [master *]
-> % wget https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v14.ckpt
(py310) keith@ai-vm [09:34:37 AM] [~/ComfyUI] [master *]
-> % wget https://huggingface.co/autismanon/modeldump/resolve/d33c452486ebd6ffc282212fc9db635e58e11917/cardosAnime_v20.safetensors

Error:

got prompt
Requested to load SD1ClipModel
Loading 1 new model
[AnimateDiffEvo] - INFO - Loading motion module mm_sd_v14.ckpt
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module mm_sd_v14.ckpt version v1.
[AnimateDiffEvo] - INFO - Ejecting motion module mm_sd_v14.ckpt version v1.
[AnimateDiffEvo] - INFO - Removing motion module mm_sd_v14.ckpt from cache
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/home/keith/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/home/keith/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/home/keith/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/home/keith/ComfyUI/nodes.py", line 1237, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "/home/keith/ComfyUI/nodes.py", line 1207, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "/home/keith/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 146, in animatediff_sample
    inject_motion_module(model=model, motion_module=motion_module, params=params)
  File "/home/keith/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/motion_module.py", line 200, in inject_motion_module
    injectors[params.injector](model, motion_module)
  File "/home/keith/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/motion_module.py", line 235, in _inject_motion_module_to_unet
    unet.output_blocks[unet_idx].insert(
  File "/home/keith/miniconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'TimestepEmbedSequential' object has no attribute 'insert'

Prompt executed in 1.21 seconds
fpcarva commented 10 months ago

@Kosinkadink

Just FYI, I solved it on my side: the issue was the VAE. Instead of loading a separate VAE, I switched to the VAE baked into the main checkpoint (DreamShaper has one built in) and no more errors!
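
If anyone else is still hitting this with a separately loaded VAE, one rough sanity check (the path below is just an example - adjust it to your own file) is to peek at the first bytes of the download; a bad download is often an HTML page or a Git LFS pointer saved in place of the actual weights:

# Rough check on a downloaded VAE/model file (example path, not necessarily yours).
# Real weights are binary; text like b'<!DOCTYPE html' or b'version https://git-lfs.github.com/...'
# means a web page or LFS pointer was saved instead of the weights themselves.
path = "models/vae/BerrysMix.vae.safetensors"
with open(path, "rb") as f:
    print(f.read(64))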

@Saranmt Can you show with images what you changed in terms of prompt, files, or code to make it work? I'm stuck with this error, and I don't think anyone has managed to solve it.