Closed murphytju closed 8 months ago
I just pushed some code - I'm pretty sure it's this
https://github.com/johndpope/Emote-hack/blob/main/configs/inference.yaml
I'll take another look tomorrow - those training stages are just basic placeholders. They should resemble the ones from AnimateAnyone - I made some notes in the README.
Thanks for your attention
last night came across this
v2 = SD 2.1, v1 = SD 1.5
https://github.com/kohya-ss/sd-scripts/blob/main/train_controlnet.py#L134
https://github.com/johndpope/Emote-hack/issues/24
I created a ticket to do both.
model = FramesEncodingVAE(input_channels=3, latent_dim=256, img_size=cfg.data.train_height, reference_net=None).to(device)
Your reference net is None.
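A minimal sketch of one way to catch this earlier: fail fast in the constructor when no reference net is passed, instead of letting None propagate into the forward pass. The FramesEncodingVAE and ReferenceNet classes below are bare stand-ins for illustration (only the constructor signature comes from the snippet above; everything else is assumed).

```python
class ReferenceNet:
    """Hypothetical placeholder for the actual reference net module."""
    pass

class FramesEncodingVAE:
    """Stand-in showing a guard against a missing reference_net."""
    def __init__(self, input_channels, latent_dim, img_size, reference_net):
        if reference_net is None:
            # Fail fast rather than crashing later during encoding.
            raise ValueError("reference_net must be provided, got None")
        self.input_channels = input_channels
        self.latent_dim = latent_dim
        self.img_size = img_size
        self.reference_net = reference_net

# Construction now requires an actual reference net instance.
model = FramesEncodingVAE(input_channels=3, latent_dim=256,
                          img_size=512, reference_net=ReferenceNet())
```

With a guard like this, passing reference_net=None raises immediately at model construction instead of surfacing as a confusing error deeper in training.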