Rongjiehuang / Multi-Singer

PyTorch Implementation of Multi-Singer (ACM-MM'21)
MIT License

Running preprocessing on Windows 11 fails with "Exception: Model was not loaded. Call load_model() before inference." #11

Closed Chopin68 closed 1 year ago

Chopin68 commented 2 years ago

```
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "Z:\Ai\envs\Tacotron2\lib\multiprocessing\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "Z:\deeplearing_project\Multi-Singer-main\preprocess.py", line 52, in extract_feats
    embed = encoder.embed_utterance_torch_preprocess(preprocessed_wav)
  File "Z:\deeplearing_project\Multi-Singer-main\encoder\inference.py", line 186, in embed_utterance_torch_preprocess
    partial_embeds = embed_frames_batch_torch(frames_batch)  # (batch, n_embeddings(256))
  File "Z:\deeplearing_project\Multi-Singer-main\encoder\inference.py", line 60, in embed_frames_batch_torch
    raise Exception("Model was not loaded. Call load_model() before inference.")
Exception: Model was not loaded. Call load_model() before inference.
"""
```

The above exception was the direct cause of the following exception:

```
Traceback (most recent call last):
  File "preprocess.py", line 202, in <module>
    main()
  File "preprocess.py", line 194, in main
    values.append(future.get())
  File "Z:\Ai\envs\Tacotron2\lib\multiprocessing\pool.py", line 657, in get
    raise self._value
Exception: Model was not loaded. Call load_model() before inference.
```

Do I have to use Linux?
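Not necessarily. A likely cause (an assumption based on the traceback, not confirmed against this repo's code): on Windows, `multiprocessing` always uses the `spawn` start method, so each worker process re-imports the modules and does not inherit module-level state set in the parent — such as a model loaded by `load_model()` before the `Pool` was created. On Linux the default `fork` method copies the parent's memory, so the same script can work there. A minimal, self-contained sketch of the failure and the usual fix (loading the model in each worker via a `Pool` initializer — all names here are illustrative, not the repo's actual API):

```python
import multiprocessing as mp

# Module-level "model" state, mimicking the module-level model
# that load_model() sets in encoder/inference.py.
_model = None

def load_model():
    """Pretend to load the model into module-level state."""
    global _model
    _model = "loaded"

def _init_worker():
    # With the spawn start method (Windows), each worker re-imports
    # this module fresh, so _model is None there; reload it per worker.
    load_model()

def infer(x):
    # Same guard pattern as the traceback's embed_frames_batch_torch.
    if _model is None:
        raise Exception("Model was not loaded. Call load_model() before inference.")
    return x * 2

if __name__ == "__main__":
    load_model()  # loads in the parent process only
    # Without initializer=_init_worker, spawned workers would raise the
    # same "Model was not loaded" exception seen in the issue.
    with mp.Pool(2, initializer=_init_worker) as pool:
        print(pool.map(infer, [1, 2, 3]))
```

If this diagnosis is right, an equivalent workaround would be calling `load_model()` (or the repo's actual loader) at the top of the worker function (`extract_feats` in `preprocess.py`), so each spawned process loads the model before inference.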