microvswind closed this issue 9 months ago
Could you try and update Steerable Motion and grab the latest workflow from here: https://github.com/banodoco/Steerable-Motion/blob/main/demo/creative_interpolation_example.json
Thanks for the reply. I updated Steerable Motion, used the latest workflow, and downloaded the 3 models from the links described in the workflow: v3_sd15_mm.ckpt, v3_sd15_sparsectrl_rgb.ckpt, v3_sd15_adapter.ckpt. I already had these 3 models: v1-5-pruned-emaonly.ckpt, ip-adapter_sd15_light.safetensors, sd1.5_clipvision_pytorch_model.bin.
And the issue still persists.
Could you try switching the IPAdapter model out for Ipadapter_plus?
Wait, are the two images different sizes? It looks like they are here:
If so, that's your problem!
No, the images are the same size.
All of them are 512×512.
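(Side note: a quick way to rule this size question out is to check every input image before running the workflow. A minimal sketch, assuming PNG inputs in a folder called inputs - adjust the path and extension to your setup.)

```python
# Verify all input images share the same dimensions (Pillow).
from pathlib import Path
from PIL import Image

sizes = {p.name: Image.open(p).size for p in Path("inputs").glob("*.png")}
print(sizes)
assert len(set(sizes.values())) <= 1, f"mixed sizes: {sizes}"
```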
When I switch the IPAdapter to Ipadapter_plus, I get more errors.
It looks like you're not selecting the right IPAdapter model - check the default names in my workflow. It should be ipadapter_plus, not faceid. This would explain the error, as FaceID models require a different node.
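(For anyone debugging this: the IPAdapter families can usually be told apart by the keys inside the checkpoint. The sketch below is a diagnostic under assumed key layouts from the public IP-Adapter releases - the path is a placeholder, and none of this is part of the workflow itself.)

```python
import torch

path = "ComfyUI/models/ipadapter/ip-adapter-plus_sd15.safetensors"  # placeholder path

if path.endswith(".safetensors"):
    from safetensors.torch import load_file
    flat = load_file(path)  # keys look like "image_proj.proj_in.weight"
else:
    # the public .bin releases nest two sub-dicts: {"image_proj": ..., "ip_adapter": ...}
    nested = torch.load(path, map_location="cpu")
    flat = {f"{outer}.{k}": v for outer, inner in nested.items() for k, v in inner.items()}

if any(".lora" in k for k in flat):
    print("FaceID-style checkpoint - needs the dedicated FaceID node")
elif any(k.startswith("image_proj.latents") for k in flat):
    print("plus-style checkpoint (Resampler projector) - select it as ipadapter_plus")
else:
    print("base/light checkpoint (linear projector) - plain ipadapter")
```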
I have also been having the same issue!
Error occurred when executing BatchCreativeInterpolation:
Error(s) in loading state_dict for ResamplerImport:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
I have been looking over this thread but have not figured out where I have gone wrong. My images are the same size, and all my selections (except for the AnimateDiff Loader) have populated their models correctly. I have v3_sd_mm.ckpt in my checkpoints folder, but it does not show up in the list of possible model files for AnimateDiff. Does this perhaps have something to do with it?
Yeah, it works when I switch to ipadapter_plus - thanks so much!
I'm pretty sure it's because you have an incorrect model - I'm going to add a list of download links soon to help you validate. In the meantime, I would compare the names in the original workflow to what you have.
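(Until those links land, a rough way to validate is to diff your model folders against the file names the demo workflow references. A sketch assuming a default ComfyUI layout - the folder-to-file mapping below is a guess, so adjust it to your install. Note that AnimateDiff motion modules are typically loaded from their own folder, not from checkpoints/, which may explain v3_sd_mm.ckpt not appearing in the list.)

```python
from pathlib import Path

MODELS = Path("ComfyUI/models")  # assumed default install location
EXPECTED = {  # folder -> files the demo workflow references (mapping is a guess)
    "animatediff_models": ["v3_sd15_mm.ckpt"],
    "controlnet": ["v3_sd15_sparsectrl_rgb.ckpt"],
    "loras": ["v3_sd15_adapter.ckpt"],
    "checkpoints": ["v1-5-pruned-emaonly.ckpt"],
    "ipadapter": ["ip-adapter-plus_sd15.safetensors"],
    "clip_vision": ["sd1.5_clipvision_pytorch_model.bin"],
}

for folder, names in EXPECTED.items():
    have = {p.name for p in (MODELS / folder).iterdir()} if (MODELS / folder).is_dir() else set()
    for name in names:
        print(f"{folder}/{name}: {'ok' if name in have else 'MISSING'}")
```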
Glad to hear it!
Thanks for the response. I went back through my workflow and tried to make sure all the models were set exactly as in yours. As far as I can tell they are, but I am still running into the same issue.
What I have found is that swapping the IPAdapter over to the SD15 model (not plus) allows the process to start, but sadly I hit low VRAM pretty quickly and everything goes to hell.
I think I can just wait for your download links to double-confirm that I have the correct models - I've likely just placed the wrong model in here!
@CalebRoenigk, how much VRAM do you have? It could be that it's working but it requires maybe 10GB for 3 images.
12 GB! I'll run this again soon and report back if I can't figure it out! Thanks for all your help!
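(If you want to watch your headroom while the workflow runs, a minimal PyTorch snippet works - this is an aside, not part of Steerable Motion.)

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total = props.total_memory / 1024**3
    used = torch.cuda.memory_allocated(0) / 1024**3
    print(f"{props.name}: {used:.1f} GiB allocated of {total:.1f} GiB")
else:
    print("no CUDA device visible")
```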
Error(s) in loading state_dict for ImageProjModelImport: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 1280]).
File "/mnt/workspace/ComfyUI/execution.py", line 154, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "/mnt/workspace/ComfyUI/execution.py", line 84, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "/mnt/workspace/ComfyUI/execution.py", line 77, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "/mnt/workspace/ComfyUI/custom_nodes/steerable-motion/SteerableMotion.py", line 347, in combined_function model, = ipadapter_application.apply_ipadapter(ipadapter=ipadapter, model=model, weight=1.0, image=None, weight_type="original", File "/mnt/workspace/ComfyUI/custom_nodes/steerable-motion/imports/IPAdapterPlus.py", line 501, in apply_ipadapter self.ipadapter = IPAdapterImport( File "/mnt/workspace/ComfyUI/custom_nodes/steerable-motion/imports/IPAdapterPlus.py", line 241, in init self.image_proj_model.load_state_dict(ipadapter_model["image_proj"]) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2152, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
My images are 512×512, and I've updated ComfyUI and all the 1.5 models.