Closed manchandasahil closed 2 years ago
Follow Up.
When I remove the augmentation option, it gives the following error:
Traceback (most recent call last):
  File "speaker_reco_finetune.py", line 94, in <module>
I have optim in my yaml:

optim:
  name: sgd
  lr: .006  # (original titanet-large was trained with 0.08 lr)
  weight_decay: 0.001

  # scheduler setup
  sched:
    name: CosineAnnealing
    warmup_ratio: 0.1
    min_lr: 0.0
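For reference, this sched block describes linear warmup over the first 10% of training steps followed by cosine decay from lr down to min_lr. A rough illustrative reimplementation of that schedule (not NeMo's actual scheduler code):

```python
import math

def lr_at_step(step, total_steps, base_lr=0.006, min_lr=0.0, warmup_ratio=0.1):
    """Cosine annealing with linear warmup, per the sched block above.

    Illustrative sketch only; NeMo's CosineAnnealing implementation differs
    in detail.
    """
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear warmup from 0 up to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Cosine decay from base_lr down to min_lr over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

total = 1000
print(lr_at_step(0, total))     # 0.0 (start of warmup)
print(lr_at_step(100, total))   # 0.006 (end of warmup reaches base lr)
print(lr_at_step(1000, total))  # 0.0 (fully annealed to min_lr)
```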
Hi,
I am facing the same issue.
Is there any update on this?
Please help!
Updating the finetune method's handling of this with PR: https://github.com/NVIDIA/NeMo/pull/4504
Please pull the changes and let me know if you still face the same issue.
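I haven't checked the exact diff in that PR, but the symptom suggests the fix guards augmentor processing so it is skipped when the config section is missing or null. A rough sketch of that shape (function names here are illustrative stand-ins, not NeMo's actual code):

```python
from typing import Optional

def process_augmentations_stub(cfg: dict) -> str:
    """Stand-in for NeMo's process_augmentations.

    Like the real function, it assumes each perturbation's kwargs are
    usable; a None manifest_path here is what surfaces as the TypeError
    seen in this issue.
    """
    for name, kwargs in cfg.items():
        assert kwargs.get("manifest_path") is not None, name
    return "augmentor"

def setup_augmentor(train_ds_config: dict) -> Optional[str]:
    # The guard: only build an augmentor when the section is actually set.
    augmentor_cfg = train_ds_config.get("augmentor")
    if not augmentor_cfg:
        return None  # augmentation removed or nulled out -> no augmentor
    return process_augmentations_stub(augmentor_cfg)

print(setup_augmentor({"augmentor": None}))  # None, no crash
print(setup_augmentor({}))                   # None, no crash
```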
Thanks for the update. It worked for me; I am no longer getting the error.
Describe the bug
Hi, I am trying to follow the steps outlined here: https://github.com/NVIDIA/NeMo/tree/main/examples/speaker_tasks/recognition but when I launch:
python speaker_reco_finetune.py --pretrained_model='/notebooks/data/atc_tenant/asr/smancha5/Nemo_speaker_training/titanet_l/titanet-l.nemo' --finetune_config_file='titanet-large.yaml'
It gives me this error:

[NeMo I 2022-06-21 09:06:08 features:200] PADDING: 16
[NeMo I 2022-06-21 09:06:08 label_models:100] loss is Angular Softmax
[NeMo I 2022-06-21 09:06:11 save_restore_connector:243] Model EncDecSpeakerLabelModel was successfully restored from /notebooks/data/atc_tenant/asr/smancha5/Nemo_speaker_training/titanet_l/titanet-l.nemo.
[NeMo I 2022-06-21 09:06:11 label_models:370] Setting up data loaders with manifests provided from model_config
[NeMo I 2022-06-21 09:06:11 collections:290] Filtered duration for loading collection is 0.000000.
[NeMo I 2022-06-21 09:06:11 collections:294] # 16585 files loaded accounting to # 304 labels
[NeMo W 2022-06-21 09:06:11 label_models:133] Total number of 304 found in all the manifest files.

Traceback (most recent call last):
  File "speaker_reco_finetune.py", line 94, in <module>
    main()
  File "speaker_reco_finetune.py", line 79, in main
    speaker_model.setup_finetune_model(finetune_config.model)
  File "/home/smancha5/.local/lib/python3.8/site-packages/nemo/collections/asr/models/label_models.py", line 373, in setup_finetune_model
    self.setup_training_data(model_config.train_ds)
  File "/home/smancha5/.local/lib/python3.8/site-packages/nemo/collections/asr/models/label_models.py", line 201, in setup_training_data
    self._train_dl = self.__setup_dataloader_from_config(config=train_data_layer_config)
  File "/home/smancha5/.local/lib/python3.8/site-packages/nemo/collections/asr/models/label_models.py", line 138, in __setup_dataloader_from_config
    augmentor = process_augmentations(config['augmentor'])
  File "/home/smancha5/.local/lib/python3.8/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 890, in process_augmentations
    augmentation = perturbation_types[augment_name](...)
  File "/home/smancha5/.local/lib/python3.8/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 403, in __init__
    self._manifest = collections.ASRAudioText(manifest_path, parser=parsers.make_parser([]), index_by_file_id=True)
  File "/home/smancha5/.local/lib/python3.8/site-packages/nemo/collections/common/parts/preprocessing/collections.py", line 214, in __init__
    for item in manifest.item_iter(manifests_files):
  File "/home/smancha5/.local/lib/python3.8/site-packages/nemo/collections/common/parts/preprocessing/manifest.py", line 69, in item_iter
    for manifest_file in manifests_files:
TypeError: 'NoneType' object is not iterable
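The last frame pins down the failure: manifests_files arrives as None (the augmentor entry carries no manifest path), and iterating None raises exactly this TypeError. A minimal standalone reproduction (the function mirrors the shape of the failing loop, not NeMo's full code):

```python
def item_iter(manifests_files):
    # Mirrors the failing loop in nemo/.../manifest.py: the argument is
    # expected to be a list of manifest paths, but arrives as None.
    for manifest_file in manifests_files:
        yield manifest_file

try:
    list(item_iter(None))
except TypeError as err:
    print(err)  # 'NoneType' object is not iterable
```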
Steps/Code to reproduce bug
python speaker_reco_finetune.py --pretrained_model='/notebooks/data/atc_tenant/asr/smancha5/Nemo_speaker_training/titanet_l/titanet-l.nemo' --finetune_config_file='titanet-large.yaml'
Expected behavior
Finetuning of the speaker recognition model.
Environment overview (please complete the following information)
docker pull & docker run commands used: nvcr.io/nvidia/pytorch:22.03-py3

Environment details
NVIDIA docker image used (nvcr.io/nvidia/pytorch:22.03-py3), so no further details needed.