aidatatang_200zh: 100%|████████████████████████████████████████████████████████| 420/420 [10:32<00:00, 1.51s/speakers]
The dataset consists of 164904 utterances, 29451936 mel frames, 7510650407 audio timesteps (130.39 hours).
Max input length (text chars): 211
Max mel frames length: 874
Max audio timesteps length: 223606
Embedding: 0%| | 0/164904 [00:00<?, ?utterances/s]Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Embedding: 0%| | 0/164904 [00:07<?, ?utterances/s]
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "C:\Users\21922\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "C:\Users\21922\Desktop\MockingBird-main\models\synthesizer\preprocess.py", line 104, in embed_utterance
    encoder.load_model(encoder_model_fpath)
  File "C:\Users\21922\Desktop\MockingBird-main\models\encoder\inference.py", line 33, in load_model
    checkpoint = torch.load(weights_fpath, _device)
  File "C:\Users\21922\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py", line 771, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "C:\Users\21922\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py", line 270, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "C:\Users\21922\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py", line 251, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'data\ckpt\encoder\pretrained.pt'
"""
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\21922\Desktop\MockingBird-main\pre.py", line 74, in <module>
    create_embeddings(synthesizer_root=args.out_dir, n_processes=n_processes_embed, encoder_model_fpath=encoder_model_fpath)
  File "C:\Users\21922\Desktop\MockingBird-main\models\synthesizer\preprocess.py", line 130, in create_embeddings
    list(tqdm(job, "Embedding", len(fpaths), unit="utterances"))
  File "C:\Users\21922\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1195, in __iter__
    for obj in iterable:
  File "C:\Users\21922\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 870, in next
    raise value
FileNotFoundError: [Errno 2] No such file or directory: 'data\ckpt\encoder\pretrained.pt'
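The root cause is the missing speaker-encoder checkpoint: preprocessing succeeded, but each embedding worker calls `encoder.load_model(encoder_model_fpath)`, and `torch.load` fails because `data\ckpt\encoder\pretrained.pt` does not exist relative to the working directory. A minimal pre-flight check, assuming the path from the error message (adjust if your checkpoint lives elsewhere):

```python
from pathlib import Path

# Path taken from the FileNotFoundError above, relative to the
# MockingBird-main working directory where pre.py is run.
ckpt = Path("data") / "ckpt" / "encoder" / "pretrained.pt"

if ckpt.is_file():
    print(f"Found encoder checkpoint: {ckpt.resolve()}")
else:
    # Place the pretrained encoder checkpoint here before running pre.py.
    print(f"Missing encoder checkpoint: {ckpt.resolve()}")
```

Judging from the traceback, `pre.py` resolves `encoder_model_fpath` to this relative path, so the script must be run from the repository root with the pretrained encoder file already in place.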