I got training results for both stages 1 and 2. Stage 1 inference works, but it produces a one-second video that repeats the same frame; the stage 2 inference module does not work at all. I ran python -m pipelines.animation_stage_2 --config configs/prompts/animation_stage_2.yaml with the config values set correctly. It first threw an import error, which I fixed; now I get this error:
from diffusers.pipeline_utils import DiffusionPipeline
loaded temporal unet's pretrained weights from outputs/train_stage_2-2023-12-22T08-59-53
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/workspace/AnimateAnyone-unofficial/pipelines/animation_stage_2.py", line 244, in <module>
run(args)
File "/workspace/AnimateAnyone-unofficial/pipelines/animation_stage_2.py", line 233, in run
main(args)
File "/workspace/AnimateAnyone-unofficial/pipelines/animation_stage_2.py", line 70, in main
unet = UNet3DConditionModel.from_pretrained_2d(config.pretrained_motion_unet_path, subfolder=None, unet_additional_kwargs=OmegaConf.to_container(inference_config.unet_additional_kwargs), specific_model=config.specific_motion_unet_model)
File "/workspace/AnimateAnyone-unofficial/models/unet.py", line 457, in from_pretrained_2d
raise RuntimeError(f"{config_file} does not exist")
RuntimeError: outputs/train_stage_2-2023-12-22T08-59-53/config.json does not exist