openvpi / DiffSinger

An advanced singing voice synthesis system with high fidelity, expressiveness, controllability and flexibility based on DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism
Apache License 2.0

In automatic optimization, `training_step` must return a Tensor, a dict, or None (where the step will be skipped). #197

Closed: c7e715d1b04b17683718fb1e8944cc28 closed this issue 3 months ago

c7e715d1b04b17683718fb1e8944cc28 commented 3 months ago

In the "base_task.py" file in the "basics" folder, in the "training_step" function in the "BaseTask" class, the "total_loss" variable contains an int of 0.

Traceback (most recent call last):
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\DiffSinger\scripts\train.py", line 31, in <module>
    run_task()
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\DiffSinger\scripts\train.py", line 27, in run_task
    task_cls.start()
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\DiffSinger\basics\base_task.py", line 467, in start
    trainer.fit(task, ckpt_path=get_latest_checkpoint_path(work_dir))
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\trainer\trainer.py", line 545, in fit
    call._call_and_handle_interrupt(
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\trainer\call.py", line 44, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\trainer\trainer.py", line 581, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\trainer\trainer.py", line 990, in _run
    results = self._run_stage()
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\trainer\trainer.py", line 1036, in _run_stage
    self.fit_loop.run()
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\loops\fit_loop.py", line 202, in run
    self.advance()
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\loops\fit_loop.py", line 359, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\loops\training_epoch_loop.py", line 136, in run
    self.advance(data_fetcher)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\loops\training_epoch_loop.py", line 240, in advance
    batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py", line 187, in run
    self._optimizer_step(batch_idx, closure)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py", line 265, in _optimizer_step
    call._call_lightning_module_hook(
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\trainer\call.py", line 157, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\core\module.py", line 1282, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\core\optimizer.py", line 151, in step
    step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\strategies\strategy.py", line 230, in optimizer_step
    return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\plugins\precision\amp.py", line 77, in optimizer_step
    closure_result = closure()
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py", line 140, in __call__
    self._result = self.closure(*args, **kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py", line 126, in closure
    step_output = self._step_fn()
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py", line 318, in _training_step
    return self.output_result_cls.from_training_step_output(training_step_output, trainer.accumulate_grad_batches)
  File "D:\Users\ikki1\Desktop\Projects\DiffTrainer\.venv\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py", line 72, in from_training_step_output
    raise MisconfigurationException(
lightning.fabric.utilities.exceptions.MisconfigurationException: In automatic optimization, `training_step` must return a Tensor, a dict, or None (where the step will be skipped).
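For reference, the rule the traceback points at boils down to a type check on the return value of `training_step`; a plain int (such as the 0 above) fails it. A simplified sketch of that check, not Lightning's actual code:

```python
from typing import Optional, Union
from torch import Tensor

def check_training_step_output(output: Union[Tensor, dict, None]) -> Optional[Tensor]:
    # Simplified sketch of the rule enforced by Lightning's automatic optimization.
    if output is None:
        return None                        # the optimizer step is skipped
    if isinstance(output, dict):
        return output.get('loss')          # a dict is expected to carry a 'loss' Tensor
    if isinstance(output, Tensor):
        return output
    raise TypeError(
        "In automatic optimization, `training_step` must return a Tensor, a dict, or None."
    )

# A plain int, e.g. the 0 produced by summing an empty loss dict, hits the last branch:
# check_training_step_output(0)  # raises
```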
yqzhishen commented 3 months ago

You are not using our repository directly. Please raise your issue in the repository you are using.

c7e715d1b04b17683718fb1e8944cc28 commented 3 months ago

Sorry, but this error also occurs when using the DiffSinger repository directly.

Traceback (most recent call last):
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\scripts\train.py", line 31, in <module>
    run_task()
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\scripts\train.py", line 27, in run_task
    task_cls.start()
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\basics\base_task.py", line 467, in start
    trainer.fit(task, ckpt_path=get_latest_checkpoint_path(work_dir))
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\trainer\trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\trainer\call.py", line 44, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\trainer\trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\trainer\trainer.py", line 989, in _run
    results = self._run_stage()
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\trainer\trainer.py", line 1035, in _run_stage
    self.fit_loop.run()
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\loops\fit_loop.py", line 202, in run
    self.advance()
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\loops\fit_loop.py", line 359, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\loops\training_epoch_loop.py", line 136, in run
    self.advance(data_fetcher)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\loops\training_epoch_loop.py", line 240, in advance
    batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py", line 187, in run
    self._optimizer_step(batch_idx, closure)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py", line 265, in _optimizer_step
    call._call_lightning_module_hook(
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\trainer\call.py", line 157, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\core\module.py", line 1291, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\core\optimizer.py", line 151, in step
    step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\strategies\strategy.py", line 230, in optimizer_step
    return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\plugins\precision\amp.py", line 77, in optimizer_step
    closure_result = closure()
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py", line 140, in __call__
    self._result = self.closure(*args, **kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py", line 126, in closure
    step_output = self._step_fn()
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py", line 318, in _training_step
    return self.output_result_cls.from_training_step_output(training_step_output, trainer.accumulate_grad_batches)
  File "D:\Users\ikki1\Desktop\Projects\DiffSinger\.venv\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py", line 72, in from_training_step_output
    raise MisconfigurationException(
lightning.fabric.utilities.exceptions.MisconfigurationException: In automatic optimization, `training_step` must return a Tensor, a dict, or None (where the step will be skipped).
yqzhishen commented 3 months ago

Can you upload your config file?

c7e715d1b04b17683718fb1e8944cc28 commented 3 months ago

Okay. The config is based on the DiffSinger documentation: https://openvpi-docs.feishu.cn/wiki/Mt13w6wUaiI0zUkN2wkcwyWBndh

base_config: configs/variance.yaml

raw_data_dir: 
  - data/20240617225409/raw

speakers: 
  - "20240617225409"

spk_ids: []

test_prefixes: 
  - _ああいあうえあ_N_A#2 
  - _ああいあうえあ_N_A#2
  - _ああいあうえあ_N_A3 
  - _ああいあうえあ_N_B2
  - _ああいあうえあ_N_C#3

dictionary: dictionaries/jpn_dict.txt

binary_data_dir: data/20240617225409/binary

binarization_args: 
  num_workers: 0 

use_spk_id: false 

num_spk: 1

predict_dur: false
predict_pitch: false
predict_energy: false
predict_breathiness: false
predict_voicing: false
predict_tension: false

energy_db_min: -96.0
energy_db_max: -12.0

breathiness_db_min: -96.0
breathiness_db_max: -20.0

voicing_db_min: -96.0
voicing_db_max: -12.0

tension_logit_min: -10.0
tension_logit_max: 10.0

hidden_size: 256
dur_prediction_args:
  arch: fs2
  hidden_size: 512
  dropout: 0.1
  num_layers: 5
  kernel_size: 3
  log_offset: 1.0
  loss_type: mse
  lambda_pdur_loss: 0.3
  lambda_wdur_loss: 1.0
  lambda_sdur_loss: 3.0

use_melody_encoder: false
melody_encoder_args:
  hidden_size: 128
  enc_layers: 4
use_glide_embed: false
glide_types: [up, down]
glide_embed_scale: 11.313708498984760

pitch_prediction_args:
  pitd_norm_min: -8.0
  pitd_norm_max: 8.0
  pitd_clip_min: -12.0
  pitd_clip_max: 12.0
  repeat_bins: 64
  residual_layers: 20
  residual_channels: 256
  dilation_cycle_length: 5  # *

variances_prediction_args:
  total_repeat_bins: 48
  residual_layers: 10
  residual_channels: 192
  dilation_cycle_length: 4  # *

lambda_dur_loss: 1.0
lambda_pitch_loss: 1.0
lambda_var_loss: 1.0

optimizer_args: 
  lr: 0.0006 
lr_scheduler_args:
  scheduler_cls: torch.optim.lr_scheduler.StepLR
  step_size: 10000
  gamma: 0.75

max_batch_frames: 80000

max_batch_size: 9

max_updates: 160000

num_valid_plots: 10

val_check_interval: 2000

num_ckpt_keep: 5

permanent_ckpt_start: 80000 

permanent_ckpt_interval: 10000

pl_trainer_devices: 'auto'

pl_trainer_precision: '16-mixed' 
yqzhishen commented 3 months ago

Please turn on at least one of the predict_* switches. Otherwise the model has nothing to train.
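For example (one possible choice, assuming pitch prediction is wanted), enabling a single switch in the config above gives the model a training target:

```yaml
predict_dur: false
predict_pitch: true        # at least one predict_* switch must be true
predict_energy: false
predict_breathiness: false
predict_voicing: false
predict_tension: false
```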

c7e715d1b04b17683718fb1e8944cc28 commented 3 months ago

Thank you! You've been very helpful. We apologize for the inconvenience.

sp1cae commented 1 month ago

Please turn on at least one of the predict_* switches. Otherwise the model has nothing to train.

Do you mean that at least one of `predict_dur: false` and `predict_pitch: false` has to be turned on? Mine is set like this: `predict_dur: false`, `predict_pitch: false`, `predict_energy: ture`, `predict_breathiness: ture`, `predict_voicing: ture`, `predict_tension: false`, and it still throws the error.