johndpope / Emote-hack

Emote Portrait Alive - using AI to reverse-engineer code from the white paper. (abandoned)
https://github.com/johndpope/VASA-1-hack

Where is the reference net config?? #21

Closed: murphytju closed this issue 8 months ago

murphytju commented 8 months ago

```python
model = FramesEncodingVAE(input_channels=3, latent_dim=256, img_size=cfg.data.train_height, reference_net=None).to(device)
```

Your reference net is None.
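For context, a minimal sketch of what a non-None wiring could look like, assuming an AnimateAnyone-style reference net initialized from the SD 1.5 UNet; only FramesEncodingVAE, cfg.data.train_height, and the reference_net argument come from the snippet above, everything else is an assumption:

```python
import torch
from diffusers import UNet2DConditionModel

# FramesEncodingVAE and cfg come from this repo, as in the snippet above.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# AnimateAnyone-style reference net: a copy of the SD 1.5 UNet whose
# spatial self-attention features are injected into the denoising UNet.
# Starting from the pretrained weights is the usual initialization.
reference_net = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
).to(device)

model = FramesEncodingVAE(
    input_channels=3,
    latent_dim=256,
    img_size=cfg.data.train_height,
    reference_net=reference_net,  # instead of None
).to(device)
```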

johndpope commented 8 months ago

I just pushed some code - I'm pretty sure it's this:

https://github.com/johndpope/Emote-hack/blob/main/configs/inference.yaml

I'll take another look tomorrow - those training stages are just basic placeholders. They should resemble the ones from AnimateAnyone - I made some notes in the README.
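For anyone wiring this up, a minimal sketch of loading a config like that inference.yaml with OmegaConf - consistent with the cfg.data.train_height access in the snippet above, though every key and path here beyond that one is an assumption:

```python
from omegaconf import OmegaConf

# Load the YAML config linked above; the cfg.data.train_height access in
# the earlier snippet suggests an OmegaConf-style nested config.
cfg = OmegaConf.load("configs/inference.yaml")

# Optional: merge in command-line overrides,
# e.g. `python infer.py data.train_height=512`.
cfg = OmegaConf.merge(cfg, OmegaConf.from_cli())

# Inspect what the config actually contains before trusting any key.
print(OmegaConf.to_yaml(cfg))
```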

Thanks for your attention.

johndpope commented 8 months ago

Last night I came across this:

v2 = SD 2.1, v1 = SD 1.5

https://github.com/kohya-ss/sd-scripts/blob/main/train_controlnet.py#L134

https://github.com/kohya-ss/sd-scripts/blob/f9317052edb4ab3b3c531ac3b28825ee78b4a966/library/model_util.py#L1079

https://github.com/johndpope/Emote-hack/issues/24

I created a ticket to do both.
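For readers unfamiliar with that flag, a diffusers-based sketch (not the kohya-ss loader itself) of what the v1/v2 split means in practice; the two repo ids are the standard Hugging Face releases:

```python
from diffusers import UNet2DConditionModel

# v1 = SD 1.5: CLIP ViT-L text encoder, cross-attention dim 768.
unet_v1 = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# v2 = SD 2.1: OpenCLIP ViT-H text encoder, cross-attention dim 1024.
unet_v2 = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="unet"
)

# A v2-aware loader has to pick the right text encoder and attention dims.
print(unet_v1.config.cross_attention_dim)  # 768
print(unet_v2.config.cross_attention_dim)  # 1024
```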