Closed — Bismuth209 closed this issue 8 months ago
If this is happening in stage 1, I think it's very abnormal. What do you think might be the issue? I'm directly using train_hack.py on the UBC images.
Maybe you can try overfitting the training on a few videos first.
I'm getting the same issues even when overfitting on a few images. I trained for a few hours on just 3 images and am still seeing the same problems. Could there be an issue with the VAE encoder/decoder? I'm using clip-vit-base-patch32 for CLIP and the standard VAE from Stable Diffusion 1.5.
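One way to rule the VAE in or out is a simple round-trip sanity check: encode one training frame, decode it, and compare the reconstruction to the input. The helpers below are a minimal sketch of the pre/post-processing and the comparison metric; the [-1, 1] normalization matches what SD 1.x VAEs expect, but the exact preprocessing in this repo's dataloader may differ, and the PSNR threshold mentioned in the comment is a rough rule of thumb, not a documented spec.

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """HWC uint8 in [0, 255] -> CHW float32 in [-1, 1], as the SD VAE expects."""
    x = img.astype(np.float32) / 127.5 - 1.0
    return x.transpose(2, 0, 1)

def postprocess(x: np.ndarray) -> np.ndarray:
    """CHW float32 in [-1, 1] -> HWC uint8, clipping any decoder overshoot."""
    img = (x.transpose(1, 2, 0) + 1.0) * 127.5
    return np.clip(np.round(img), 0, 255).astype(np.uint8)

def roundtrip_psnr(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """PSNR between the input frame and the VAE reconstruction.

    If the VAE round trip looks garbled or scores very low here, the problem
    is in the VAE or the preprocessing, not in the trained UNet stages.
    """
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0**2 / mse)
```

With diffusers, the middle step would be something like `latents = vae.encode(torch.from_numpy(preprocess(img))[None]).latent_dist.sample()` followed by `recon = vae.decode(latents).sample`, then `roundtrip_psnr(img, postprocess(recon[0].numpy()))` on the result; a healthy SD 1.5 VAE reconstructs a clean training frame almost perfectly.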
This is my config: https://github.com/Bismuth209/AnimateAnyone-unofficial/blob/main/configs/training/v2/v2.2.yaml and my training script: https://github.com/Bismuth209/AnimateAnyone-unofficial/blob/main/train.py
I'm sorry, I've never seen this before. I feel quite embarrassed...
I'm sorry but I'm not sure what you mean by "this"
Is this normal?