mayuelala / FollowYourPose

[AAAI 2024] Follow-Your-Pose: This repo is the official implementation of "Follow-Your-Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos"
https://follow-your-pose.github.io/
MIT License

Training Code #40

Closed: treesturn closed this issue 1 year ago

treesturn commented 1 year ago

Is the output config file from running the training command:

```bash
TORCH_DISTRIBUTED_DEBUG=DETAIL accelerate launch \
    --multi_gpu --num_processes=8 --gpu_ids '0,1,2,3,4,5,6,7' \
    train_followyourpose.py \
    --config="configs/pose_train.yaml"
```

supposed to be used with the inference command?

```bash
TORCH_DISTRIBUTED_DEBUG=DETAIL accelerate launch \
    --gpu_ids '0' \
    txt2video.py \
    --config="configs/pose_sample.yaml" \
    --skeleton_path="./pose_example/vis_ikun_pose2.mov"
```

The output config.yaml file doesn't look like the pose_sample.yaml file that the inference command expects.

mayuelala commented 1 year ago

Thanks for your attention. pose_train.yaml is used for training; you could also use this config for finetuning.
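
(For anyone following along, here is a minimal single-GPU finetuning sketch. It assumes the pretrained checkpoint, dataset, and output paths are set inside configs/pose_train.yaml; the exact YAML keys are repo-specific and not shown in this thread.)

```bash
# Illustrative single-GPU finetuning launch; paths and keys live in the YAML config.
# Edit configs/pose_train.yaml first so it points at the released weights and your data.
TORCH_DISTRIBUTED_DEBUG=DETAIL accelerate launch \
    --num_processes=1 --gpu_ids '0' \
    train_followyourpose.py \
    --config="configs/pose_train.yaml"
```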

treesturn commented 1 year ago

Hi, thanks for the prompt reply! I have a few questions:

  1. What is the difference between the output config.yaml file and pose_sample.yaml?
  2. How do I use the output config file for finetuning? Is there an example command?

thanks!

mayuelala commented 1 year ago

Hi

  1. There are many differences between the training yaml and pose_sample.yaml, such as the learning rate and so on; I don't fully understand your meaning.
  2. You can just follow my README.md and run:

```bash
TORCH_DISTRIBUTED_DEBUG=DETAIL accelerate launch \
    --multi_gpu --num_processes=8 --gpu_ids '0,1,2,3,4,5,6,7' \
    train_followyourpose.py \
    --config="configs/pose_train.yaml"
```
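
(To round out the workflow implied above: training writes its resolved config and checkpoints to an output directory, while sampling is still driven by configs/pose_sample.yaml rather than by the saved training config. Below is a hedged sketch of sampling from your own finetuned weights; it assumes you first point the checkpoint path inside pose_sample.yaml at your training output directory, and the exact YAML key depends on the repo version.)

```bash
# Illustrative inference launch after finetuning.
# Before running, edit configs/pose_sample.yaml so its checkpoint path
# references your training output directory (key name varies by repo version).
TORCH_DISTRIBUTED_DEBUG=DETAIL accelerate launch \
    --gpu_ids '0' \
    txt2video.py \
    --config="configs/pose_sample.yaml" \
    --skeleton_path="./pose_example/vis_ikun_pose2.mov"
```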