hitachi-speech / EEND

End-to-End Neural Diarization
MIT License

Run log is empty #46

Open · maerduduqi opened this issue 2 months ago

maerduduqi commented 2 months ago

train.py -c conf/train.yaml data/simu/data/train_clean_5_ns2_beta2_500 data/simu/data/dev_clean_2_ns2_beta2_500 exp/diarize/model/train_clean_5_ns2_beta2_500.dev_clean_2_ns2_beta2_500.train

Started at Monday, April 22, 2024, 17:20:17 CST

# python version: 3.7.16 (default, Jan 17 2023, 22:20:44) [GCC 11.2.0]
# chainer version: 7.8.0
# cupy version: 7.7.0
# cuda version: 10010
# cudnn version: 7605

namespace(attractor_decoder_dropout=0.1, attractor_encoder_dropout=0.1, attractor_loss_ratio=1.0, backend='chainer', batchsize=8, config=[<yamlargparse.Path object at 0x7fef1ab59c10>], context_size=7, dc_loss_ratio=0.5, embedding_layers=2, embedding_size=256, frame_shift=80, frame_size=200, gpu=0, gradclip=5, gradient_accumulation_steps=1, hidden_size=256, initmodel='', input_transform='logmel23_mn', label_delay=0, lr=0.001, max_epochs=10, model_save_dir='exp/diarize/model/train_clean_5_ns2_beta2_500.dev_clean_2_ns2_beta2_500.train', model_type='Transformer', noam_scale=1.0, noam_warmup_steps=25000.0, num_frames=500, num_lstm_layers=1, num_speakers=2, optimizer='noam', resume='', sampling_rate=8000, seed=777, shuffle=False, subsampling=10, train_data_dir='data/simu/data/train_clean_5_ns2_beta2_500', transformer_encoder_dropout=0.1, transformer_encoder_n_heads=4, transformer_encoder_n_layers=2, use_attractor=False, valid_data_dir='data/simu/data/dev_clean_2_ns2_beta2_500')

2730 chunks
1863 chunks
GPU device 0 is used
Prepared model

maerduduqi commented 2 months ago

It reports an error.