PKU-YuanGroup / Open-Sora-Plan

This project aims to reproduce Sora (OpenAI's T2V model); we hope the open-source community will contribute to it.

resolution and ratio #510

junsukha opened this issue 4 weeks ago

junsukha commented 4 weeks ago

Hi,

I'm trying to fine-tune the model with a specific resolution and aspect ratio. I see that --max_height and --max_width should be divisible by 8 (I'm using "--ae=WFVAEModel_D8_4x8x8"). Given that, are there specific ratios or resolutions I should use for training to get the best generated output at that resolution?
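
As a quick sanity check on the divisibility constraint, here is a minimal sketch in plain Python (SPATIAL_STRIDE and check_resolution are illustrative names of my own, not project API; the 8x stride is read off the WFVAEModel_D8_4x8x8 name):

    # Minimal sketch: check that a candidate (height, width) is compatible with
    # the 8x spatial downsampling implied by WFVAEModel_D8_4x8x8.
    SPATIAL_STRIDE = 8  # illustrative constant, not a project symbol

    def check_resolution(height: int, width: int) -> bool:
        """True if both dimensions are multiples of the VAE's spatial stride."""
        return height % SPATIAL_STRIDE == 0 and width % SPATIAL_STRIDE == 0

    print(check_resolution(648, 1152))  # True: 648 = 81 * 8, 1152 = 144 * 8
    print(check_resolution(650, 1152))  # False: 650 is not a multiple of 8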

For example, I'm currently using the smallest batch size (i.e. 1 per GPU) so that the largest possible resolution fits in GPU VRAM. Since videos are usually 16:9, I'm trying resolutions of the form (16k, 9k) where k is divisible by 8. I was wondering whether the resolution I use for training affects the generated output videos. For inference, I will use the same resolution I used for training, i.e. (16k, 9k).
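
To make the (16k, 9k) idea concrete, here is a small sketch in plain Python (MAX_PIXELS is an arbitrary stand-in for whatever actually fits in VRAM) that enumerates 16:9 resolutions whose dimensions are multiples of 8:

    # Enumerate 16:9 resolutions (width, height) = (16k, 9k); k must be a
    # multiple of 8 so that 9k is also divisible by 8.
    MAX_PIXELS = 1280 * 720  # arbitrary placeholder for the VRAM-driven limit

    for k in range(8, 200, 8):
        width, height = 16 * k, 9 * k
        if width * height > MAX_PIXELS:
            break
        print(f"{width}x{height}")  # 128x72, 256x144, ..., 1152x648, 1280x720

The (1152, 648) pair in the arguments below corresponds to k = 72.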

I'm fine-tuning the v1.3 model.

Below are the arguments I used:

            "args": [
            "--config_file", "scripts/accelerate_configs/deepspeed_zero2_config.yaml", 
            "opensora/train/train_t2v_diffusers.py",
            "--model=OpenSoraT2V_v1_3-2B/122",
            "--text_encoder_name_1=/mnt/singularity_home/jsha/repos/Open-Sora-Plan/weights/google/mt5-xxl",
            "--cache_dir=../../cache_dir/",
            "--dataset=t2v",
            "--data=/mnt/singularity_home/jsha/repos/Open-Sora-Plan/open_sora_plan_dummy_data/training/data.txt",
            "--ae=WFVAEModel_D8_4x8x8",
            "--ae_path", "/gpfs/vision/drag_video/HF_downloads/Open-Sora-Plan-v1.3.0/vae",
            "--sample_rate", "1",
            "--num_frames", "33",
            "--max_height", "648", 
            "--max_width", "1152", 
            "--interpolation_scale_t", "1.0" ,
            "--interpolation_scale_h", "1.0" ,
            "--interpolation_scale_w", "1.0" ,
            "--gradient_checkpointing", 
            "--train_batch_size","1", 
            "--dataloader_num_workers", "0" ,
            "--gradient_accumulation_steps","1" ,
            "--max_train_steps","100" ,
            "--learning_rate","1e-5" ,
            "--lr_scheduler","constant" ,
            "--lr_warmup_steps","0" ,
            "--mixed_precision=bf16" ,
            "--report_to=tensorboard" ,
            "--checkpointing_steps=500" ,
            "--allow_tf32", 
            "--model_max_length", "512", 
            "--use_ema" ,
            "--ema_start_step","0", 
            "--cfg"," 0.1" ,
            "--resume_from_checkpoint=latest", 
            "--speed_factor", "1.0", 
            "--ema_decay"," 0.9999" ,
            "--drop_short_ratio","0.0",
            "--pretrained", "" ,
            "--hw_stride", "32", 
            "--sparse1d", "--sparse_n", "4" ,
            "--train_fps", "16" ,
            "--seed", "1234", 
            "--trained_data_global_step","0" ,
            "--group_data", 
            "--use_decord", 
            "--prediction_type", "v_prediction",
            "--snr_gamma", "5.0", 
            "--force_resolution", 
            "--rescale_betas_zero_snr", 
            "--output_dir","/mnt/singularity_home/jsha/repos/Open-Sora-Plan/output",
            // "--sp_size=2", 
LinB203 commented 4 weeks ago

If you want the best results, I recommend keeping the resolution consistent between training and generation.
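
For instance, a minimal sketch of that advice in plain Python (the dictionaries just mirror the training flags above; the inference-side keys are hypothetical and depend on the sampling script you use):

    # Keep resolution and frame count consistent between training and sampling.
    # The inference keys below are illustrative, not a real sampling-script API.
    train_args = {"max_height": 648, "max_width": 1152, "num_frames": 33}
    sample_args = {"height": 648, "width": 1152, "num_frames": 33}

    assert sample_args["height"] == train_args["max_height"]
    assert sample_args["width"] == train_args["max_width"]
    assert sample_args["num_frames"] == train_args["num_frames"]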