frankchieng / ComfyUI_Aniportrait

Unofficial implementation of AniPortrait custom node in ComfyUI

Updates:

① Implemented frame interpolation to speed up generation

② Modified the code to support chaining with the VHS nodes. I found that ComfyUI's IMAGE type requires the torch float32 datatype, while AniPortrait heavily uses numpy uint8 images, so I switched from my own image/video upload and generation nodes to the prevalent SOTA VHS image/video upload and Video Combine nodes. The workflow is now WYSIWYG, interacts well, and renders results instantly.
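The dtype mismatch mentioned above boils down to a simple conversion pair. As a minimal numpy sketch (ComfyUI additionally wraps the float32 array in a torch tensor, but the value mapping is the same):

```python
import numpy as np

def uint8_to_float(img_uint8):
    # uint8 [0, 255] -> float32 [0.0, 1.0], the range ComfyUI's IMAGE type expects
    return img_uint8.astype(np.float32) / 255.0

def float_to_uint8(img_float):
    # float32 [0.0, 1.0] -> uint8 [0, 255], clipping out-of-range values first
    return (np.clip(img_float, 0.0, 1.0) * 255.0).round().astype(np.uint8)
```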

You can contact me on Twitter, or on WeChat (Weixin: GalaticKing)

audio driven combined with reference image and reference video

audio2video workflow

raw video to pose video with reference image

pose2video

face reenactment

video2video workflow

This is an unofficial implementation of AniPortrait as a ComfyUI custom node. Since I have a routine job, I will update this project when I have time.

Aniportrait_pose2video.json

Audio driven

face reenactment

Clone the repository:

git clone https://github.com/frankchieng/ComfyUI_Aniportrait.git

then install the dependencies:

pip install -r requirements.txt

Download the pretrained models:

StableDiffusion V1.5

sd-vae-ft-mse

image_encoder

wav2vec2-base-960h

Download the weights:

denoising_unet.pth reference_unet.pth pose_guider.pth motion_module.pth audio2mesh.pt audio2pose.pt film_net_fp16.pt

./pretrained_model/
|-- image_encoder
|   |-- config.json
|   `-- pytorch_model.bin
|-- sd-vae-ft-mse
|   |-- config.json
|   |-- diffusion_pytorch_model.bin
|   `-- diffusion_pytorch_model.safetensors
|-- stable-diffusion-v1-5
|   |-- feature_extractor
|   |   `-- preprocessor_config.json
|   |-- model_index.json
|   |-- unet
|   |   |-- config.json
|   |   `-- diffusion_pytorch_model.bin
|   `-- v1-inference.yaml
|-- wav2vec2-base-960h
|   |-- config.json
|   |-- feature_extractor_config.json
|   |-- preprocessor_config.json
|   |-- pytorch_model.bin
|   |-- README.md
|   |-- special_tokens_map.json
|   |-- tokenizer_config.json
|   `-- vocab.json
|-- audio2mesh.pt
|-- audio2pose.pt
|-- denoising_unet.pth
|-- motion_module.pth
|-- pose_guider.pth
|-- reference_unet.pth
|-- film_net_fp16.pt
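A quick way to check that the layout above is complete is to list which required files are missing. This is a hypothetical helper, not part of the repo; the paths mirror the tree shown above (only a representative subset of the per-model files is listed):

```python
from pathlib import Path

# Representative subset of the required files from the tree above
REQUIRED = [
    "image_encoder/config.json",
    "image_encoder/pytorch_model.bin",
    "sd-vae-ft-mse/config.json",
    "stable-diffusion-v1-5/unet/config.json",
    "wav2vec2-base-960h/pytorch_model.bin",
    "denoising_unet.pth",
    "reference_unet.pth",
    "pose_guider.pth",
    "motion_module.pth",
    "audio2mesh.pt",
    "audio2pose.pt",
    "film_net_fp16.pt",
]

def missing_weights(root="./pretrained_model"):
    # Return every expected path that does not exist under root
    root = Path(root)
    return [p for p in REQUIRED if not (root / p).exists()]
```

Running `missing_weights()` after downloading should return an empty list.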

Tips: The intermediate audio file will be generated and then deleted. The raw-video-to-pose video (with audio) and the pose2video mp4 files will be located in ComfyUI's output directory. The original uploaded mp4 video should be square, e.g. 512x512; otherwise the result will look weird.
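If your source video is not square, a center crop is a simple way to satisfy that requirement before uploading. A minimal numpy sketch (hypothetical helper, not part of the node):

```python
import numpy as np

def center_crop_square(frame):
    # Crop a frame of shape (H, W, C) to a centered square of side min(H, W)
    h, w = frame.shape[:2]
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    return frame[top:top + side, left:left + side]
```

Apply it to every frame (and resize to e.g. 512x512 afterwards) before feeding the video in.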

I've updated diffusers from 0.24.x to 0.26.2. In diffusers/models/embeddings.py, the class PositionNet was renamed to GLIGENTextBoundingboxProjection and CaptionProjection was renamed to PixArtAlphaTextProjection. If you have a lower version of diffusers installed, pay attention to this and modify the corresponding python files, such as src/models/transformer_2d.py, accordingly.
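One way to tolerate both diffusers versions without editing files per install is to resolve a class by whichever name exists. This is a sketch of that pattern, not code from this repo:

```python
def resolve_class(module, new_name, old_name):
    """Return module.<new_name> if present, else module.<old_name>.

    Example for the diffusers rename described above:
        import importlib
        emb = importlib.import_module("diffusers.models.embeddings")
        PositionNet = resolve_class(emb, "GLIGENTextBoundingboxProjection", "PositionNet")
        CaptionProjection = resolve_class(emb, "PixArtAlphaTextProjection", "CaptionProjection")
    """
    cls = getattr(module, new_name, None)
    # Fall back to the pre-0.26 name; raises AttributeError if neither exists
    return cls if cls is not None else getattr(module, old_name)
```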