kijai / ComfyUI-EasyAnimateWrapper

ComfyUI wrapper nodes for EasyAnimateV3
Apache License 2.0
72 stars · 1 fork

I did run it successfully but the results were very poor #2

Open · DamienCz opened 1 month ago

DamienCz commented 1 month ago

I did run it successfully, but the results were very poor, so I'm wondering if there might be something wrong with the parameters?

missing keys: 0;

unexpected keys: 96;

[] ['loss.discriminator.main.0.bias', 'loss.discriminator.main.0.weight', 'loss.discriminator.main.11.bias', 'loss.discriminator.main.11.weight', 'loss.discriminator.main.2.weight', 'loss.discriminator.main.3.bias', 'loss.discriminator.main.3.num_batches_tracked', 'loss.discriminator.main.3.running_mean', 'loss.discriminator.main.3.running_var', 'loss.discriminator.main.3.weight', 'loss.discriminator.main.5.weight', 'loss.discriminator.main.6.bias', 'loss.discriminator.main.6.num_batches_tracked', 'loss.discriminator.main.6.running_mean', 'loss.discriminator.main.6.running_var', 'loss.discriminator.main.6.weight', 'loss.discriminator.main.8.weight', 'loss.discriminator.main.9.bias', 'loss.discriminator.main.9.num_batches_tracked', 'loss.discriminator.main.9.running_mean', 'loss.discriminator.main.9.running_var', 'loss.discriminator.main.9.weight', 'loss.discriminator3d.blocks.0.conv1.bias', 'loss.discriminator3d.blocks.0.conv1.weight', 'loss.discriminator3d.blocks.0.conv2.bias', 'loss.discriminator3d.blocks.0.conv2.weight', 'loss.discriminator3d.blocks.0.downsampler.filt', 'loss.discriminator3d.blocks.0.norm1.bias', 'loss.discriminator3d.blocks.0.norm1.weight', 'loss.discriminator3d.blocks.0.norm2.bias', 'loss.discriminator3d.blocks.0.norm2.weight', 'loss.discriminator3d.blocks.0.shortcut.0.filt', 'loss.discriminator3d.blocks.0.shortcut.1.bias', 'loss.discriminator3d.blocks.0.shortcut.1.weight', 'loss.discriminator3d.blocks.1.conv1.bias', 'loss.discriminator3d.blocks.1.conv1.weight', 'loss.discriminator3d.blocks.1.conv2.bias', 'loss.discriminator3d.blocks.1.conv2.weight', 'loss.discriminator3d.blocks.1.downsampler.filt', 'loss.discriminator3d.blocks.1.norm1.bias', 'loss.discriminator3d.blocks.1.norm1.weight', 'loss.discriminator3d.blocks.1.norm2.bias', 'loss.discriminator3d.blocks.1.norm2.weight', 'loss.discriminator3d.blocks.1.shortcut.0.filt', 'loss.discriminator3d.blocks.1.shortcut.1.bias', 'loss.discriminator3d.blocks.1.shortcut.1.weight', 'loss.discriminator3d.blocks.2.conv1.bias', 'loss.discriminator3d.blocks.2.conv1.weight', 'loss.discriminator3d.blocks.2.conv2.bias', 'loss.discriminator3d.blocks.2.conv2.weight', 'loss.discriminator3d.blocks.2.norm1.bias', 'loss.discriminator3d.blocks.2.norm1.weight', 'loss.discriminator3d.blocks.2.norm2.bias', 'loss.discriminator3d.blocks.2.norm2.weight', 'loss.discriminator3d.blocks.2.shortcut.0.bias', 'loss.discriminator3d.blocks.2.shortcut.0.weight', 'loss.discriminator3d.conv_in.bias', 'loss.discriminator3d.conv_in.weight', 'loss.discriminator3d.conv_norm_out.bias', 'loss.discriminator3d.conv_norm_out.weight', 'loss.discriminator3d.conv_out.bias', 'loss.discriminator3d.conv_out.weight', 'loss.logvar', 'loss.perceptual_loss.lin0.model.1.weight', 'loss.perceptual_loss.lin1.model.1.weight', 'loss.perceptual_loss.lin2.model.1.weight', 'loss.perceptual_loss.lin3.model.1.weight', 'loss.perceptual_loss.lin4.model.1.weight', 'loss.perceptual_loss.net.slice1.0.bias', 'loss.perceptual_loss.net.slice1.0.weight', 'loss.perceptual_loss.net.slice1.2.bias', 'loss.perceptual_loss.net.slice1.2.weight', 'loss.perceptual_loss.net.slice2.5.bias', 'loss.perceptual_loss.net.slice2.5.weight', 'loss.perceptual_loss.net.slice2.7.bias', 'loss.perceptual_loss.net.slice2.7.weight', 'loss.perceptual_loss.net.slice3.10.bias', 'loss.perceptual_loss.net.slice3.10.weight', 'loss.perceptual_loss.net.slice3.12.bias', 'loss.perceptual_loss.net.slice3.12.weight', 'loss.perceptual_loss.net.slice3.14.bias', 'loss.perceptual_loss.net.slice3.14.weight', 
'loss.perceptual_loss.net.slice4.17.bias', 'loss.perceptual_loss.net.slice4.17.weight', 'loss.perceptual_loss.net.slice4.19.bias', 'loss.perceptual_loss.net.slice4.19.weight', 'loss.perceptual_loss.net.slice4.21.bias', 'loss.perceptual_loss.net.slice4.21.weight', 'loss.perceptual_loss.net.slice5.24.bias', 'loss.perceptual_loss.net.slice5.24.weight', 'loss.perceptual_loss.net.slice5.26.bias', 'loss.perceptual_loss.net.slice5.26.weight', 'loss.perceptual_loss.net.slice5.28.bias', 'loss.perceptual_loss.net.slice5.28.weight', 'loss.perceptual_loss.scaling_layer.scale', 'loss.perceptual_loss.scaling_layer.shift']
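Worth noting: every one of the 96 unexpected keys above sits under the `loss.` prefix (2D/3D GAN discriminators and LPIPS perceptual-loss weights), i.e. modules that only exist while the VAE is being trained, so an inference-only model has nowhere to put them. A minimal sketch of how PyTorch's non-strict loading surfaces and then discards such keys (`load_vae_weights` and its arguments are hypothetical, not actual wrapper code):

```python
import torch

def load_vae_weights(vae: torch.nn.Module, ckpt_path: str) -> None:
    sd = torch.load(ckpt_path, map_location="cpu")
    sd = sd.get("state_dict", sd)  # some checkpoints nest weights under "state_dict"
    # strict=False returns a named tuple of (missing_keys, unexpected_keys)
    # instead of raising on the mismatch
    result = vae.load_state_dict(sd, strict=False)
    # Training-only modules live under "loss." and are simply dropped here;
    # any unexpected key *outside* that prefix would be a real mismatch.
    train_only = [k for k in result.unexpected_keys if k.startswith("loss.")]
    suspicious = [k for k in result.unexpected_keys if not k.startswith("loss.")]
    print(f"missing: {len(result.missing_keys)}; "
          f"training-only extras: {len(train_only)}; suspicious: {len(suspicious)}")
```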

kijai commented 1 month ago

I'm getting that error too but can still get results, so it's probably not the issue. I've noticed the results are terrible at some resolutions, depending on the model used. The prompt also affects it a lot, and as always with these kinds of models, results just vary a lot between seeds.
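If resolution is the variable, one defensive approach is to keep requests near the pixel area the model was trained at. A hypothetical helper, assuming the 768 model behaves best around a 768×768-pixel area with dimensions divisible by 16 (both are assumptions, not documented EasyAnimate constraints):

```python
import math

def snap_resolution(width: int, height: int, base: int = 768, multiple: int = 16):
    """Rescale (width, height) so the pixel area is close to base*base,
    preserving aspect ratio and snapping both dims to a multiple."""
    scale = math.sqrt((base * base) / (width * height))
    w = max(multiple, round(width * scale / multiple) * multiple)
    h = max(multiple, round(height * scale / multiple) * multiple)
    return w, h

# e.g. snap_resolution(1920, 1080) -> (1024, 576)
```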

DamienCz commented 1 month ago

I asked on WeChat about the poor generation quality, and they said they have noticed this problem and plan to launch their own ComfyUI node.

kijai commented 1 month ago

Yeah, that's good, but I haven't had issues getting decent-quality outputs; it's just a matter of settings. And these nodes are a very basic implementation, by no means finished.

kijai commented 1 month ago

Some img2vid examples with the 768 model:

https://media.discordapp.net/attachments/1260702764250435584/1260708057231392829/AnimateDiff_00007_20.mp4?ex=6690f61f&is=668fa49f&hm=67cf038e3c559a553d1d8d39f80026bc845f7106d3ae4f7d325e1a4ffbcc01a0&

https://media.discordapp.net/attachments/1260702764250435584/1260724903661801522/AnimateDiff_00009_14.mp4?ex=669105d0&is=668fb450&hm=8a8eea07bd64cf692829a221b71c841ccb7bcfc39348811d4793d7e60a8b46d8&

DamienCz commented 1 month ago

I see, it's cool, and I can get good results occasionally, but it's still a little short of their official demos.

DamienCz commented 1 month ago

I asked the project leader to make a ComfyUI version, and they said they could reproduce the results of the WebUI side in ComfyUI: https://github.com/aigc-apps/EasyAnimate/blob/main/easyanimate/comfyui/README.md. Maybe you can refer to their modifications?