tumurzakov / AnimateDiff

AnimationDiff with train
Apache License 2.0

TypeError: '<=' not supported between instances of 'int' and 'NoneType' #8

Open irldoggo opened 1 year ago

irldoggo commented 1 year ago

Edit: I am using the devel branch for reference.

File "../AnimateDiff/animatediff/utils/overlap_policy.py", line 9, in uniform
    if n <= context_size:
TypeError: '<=' not supported between instances of 'int' and 'NoneType'

I am getting this error after the first 100 steps of training, while it is doing its validation pass. I am assuming I am missing a variable in my training yaml, so this could just be a me problem. If it is, disregard this as an issue. I think this was asked in another issue, but if you could upload your training config file for reference, that would be awesome.

Super excited to see where this project goes, thanks for the hard work.
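For anyone hitting the same traceback: the comparison in `uniform` fails because the value that ends up in `context_size` was never set, so Python is comparing an `int` against `None`. A minimal sketch (hypothetical, not the repo's actual function body) reproducing the failure:

```python
def uniform(n, context_size=None):
    # When context_size is None (the config key was omitted),
    # this comparison raises the TypeError from the traceback.
    if n <= context_size:
        return [list(range(n))]
    return []

try:
    uniform(24, None)
except TypeError as e:
    print(e)  # '<=' not supported between instances of 'int' and 'NoneType'
```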

tumurzakov commented 1 year ago

@nderlyn validation_data.temporal_context is missing

pretrained_model_path: /content/animatediff/models/StableDiffusion/
motion_module: /content/animatediff/models/Motion_Module/mm_sd_v14.ckpt
motion_module_pe_multiplier: 1
inference_config_path: /content/drive/MyDrive/AI/video/videos/intro3/infer/valid.yaml
start_global_step: 0
output_dir: /content/drive/MyDrive/AI/video/videos/intro3/train
train_data:
  video_path:
  - /content/drive/MyDrive/AI/video/videos/intro3/dataset/0.mp4
  - /content/drive/MyDrive/AI/video/videos/intro3/dataset/1.mp4
  prompt:
  - fly over mist
  - fly over mist
  n_sample_frames: 24
  width: 480
  height: 272
  sample_start_idx: 0
  sample_frame_rate: 1
validation_data:
  prompts:
  - fly over mist
  - fly over mist
  video_length: 24
  width: 480
  height: 272
  temporal_context: 24
  num_inference_steps: 20
  guidance_scale: 12.5
  use_inv_latent: true
  num_inv_steps: 50
learning_rate: 3.0e-05
train_batch_size: 1
max_train_steps: 1
checkpointing_steps: 100
validation_steps: 10000
trainable_modules:
- to_q
seed: 33
mixed_precision: fp16
use_8bit_adam: false
gradient_checkpointing: true
enable_xformers_memory_efficient_attention: true
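If you want to guard against the missing key rather than edit every config, one option (a hypothetical sketch, not code from this repo) is to fall back to `video_length` when `temporal_context` is absent, so the overlap policy never receives `None`:

```python
# Hypothetical sketch: default temporal_context to video_length
# when the key is missing from validation_data.
validation_data = {
    "video_length": 24,
    "width": 480,
    "height": 272,
    # "temporal_context": 24,  # <- the key the error points at
}

temporal_context = validation_data.get("temporal_context")
if temporal_context is None:
    temporal_context = validation_data["video_length"]

print(temporal_context)  # 24
```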