PaddlePaddle / PaddleSpeech

Easy-to-use Speech Toolkit including Self-Supervised Learning model, SOTA/Streaming ASR with punctuation, Streaming TTS with text frontend, Speaker Verification System, End-to-End Speech Translation and Keyword Spotting. Won NAACL2022 Best Demo Award.
https://paddlespeech.readthedocs.io
Apache License 2.0

The one-time configuration of analysis predictor failed, which may be due to native predictor called first and its configurations taken effect. #2379

Closed michelleqyhqyh closed 2 years ago

michelleqyhqyh commented 2 years ago

```
(cuda1100) [qyh@localhost tts3]$ CUDA_VISIBLE_DEVICES='4' ./local/inference.sh ./train
[nltk_data] Error loading averaged_perceptron_tagger: <urlopen error [Errno 111] Connection refused>
[nltk_data] Error loading cmudict: <urlopen error [Errno 111] Connection refused>
/home/qyh/anaconda3/envs/cuda1100/lib/python3.7/site-packages/librosa/core/constantq.py:1059: DeprecationWarning: np.complex is a deprecated alias for the builtin complex. To silence this warning, use complex by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.complex128 here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  dtype=np.complex,
[2022-09-14 11:08:36,615] [ INFO] - Already cached /home/qyh/.paddlenlp/models/bert-base-chinese/bert-base-chinese-vocab.txt
W0914 11:08:40.740352 1356097 analysis_predictor.cc:1118] The one-time configuration of analysis predictor failed, which may be due to native predictor called first and its configurations taken effect.
Traceback (most recent call last):
  File "/resources/qyh/TTS/PaddleSpeech-develop/paddlespeech/t2s/exps/fastspeech2/../inference.py", line 198, in <module>
    main()
  File "/resources/qyh/TTS/PaddleSpeech-develop/paddlespeech/t2s/exps/fastspeech2/../inference.py", line 127, in main
    device=args.device)
  File "/resources/qyh/TTS/PaddleSpeech-develop/paddlespeech/t2s/exps/syn_utils.py", line 366, in get_predictor
    predictor = inference.create_predictor(config)
RuntimeError: (NotFound) Cannot open file train/inference/fastspeech2_csmsc.pdmodel, please confirm whether the file is normal.
  [Hint: Expected static_cast<bool>(fin.is_open()) == true, but received static_cast<bool>(fin.is_open()):0 != true:1.] (at /paddle/paddle/fluid/inference/api/analysis_predictor.cc:1500)
```

I'm using PaddleSpeech-develop/examples/csmsc/tts3. How can I solve this issue?

yt605155624 commented 2 years ago

You should run ./local/synthesize_e2e.sh first to generate the static models, which are the input of ./local/inference.sh, or download the static models we released and move them into the right directory.
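Before re-running ./local/inference.sh, you can verify that the exported static-graph file is where inference.py will look for it. A minimal sketch (the path and filename are taken from the traceback above; the helper name is hypothetical):

```python
import os

def static_model_exists(train_output_path: str, am: str = "fastspeech2_csmsc") -> bool:
    """Return True if the exported static-graph model that inference.py
    tries to open is present (path layout per the traceback above)."""
    model_file = os.path.join(train_output_path, "inference", am + ".pdmodel")
    return os.path.isfile(model_file)
```

After ./local/synthesize_e2e.sh has run, `static_model_exists("./train")` should return True; if it returns False, the export step has not produced the model yet.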

michelleqyhqyh commented 2 years ago

thanks!


michelleqyhqyh commented 2 years ago

> you should run ./local/synthesize_e2e.sh first to generate static models which are the input of ./local/inference.sh, or download static models we released and move them into the right dir

```
--- Running analysis [ir_graph_to_program_pass]
I0914 14:50:58.081429 3833755 analysis_predictor.cc:1035] ======= optimize end =======
I0914 14:50:58.104908 3833755 naive_executor.cc:102] --- skip [feed], feed -> logmel
I0914 14:50:58.121094 3833755 naive_executor.cc:102] --- skip [transpose_104.tmp_0], fetch -> fetch
Building prefix dict from the default dictionary ...
[2022-09-14 14:50:58] [DEBUG] [__init__.py:113] Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
[2022-09-14 14:50:58] [DEBUG] [__init__.py:133] Loading model from cache /tmp/jieba.cache
Dumping model to file cache /tmp/jieba.cache
[2022-09-14 14:50:58] [DEBUG] [__init__.py:147] Dumping model to file cache /tmp/jieba.cache
Dump cache file failed.
Traceback (most recent call last):
  File "/home/qyh/anaconda3/envs/cuda1100/lib/python3.7/site-packages/jieba/__init__.py", line 154, in initialize
    _replace_file(fpath, cache_file)
PermissionError: [Errno 1] Operation not permitted: '/tmp/tmp5tskku9m' -> '/tmp/jieba.cache'
[2022-09-14 14:50:58] [ERROR] [__init__.py:156] Dump cache file failed.
Traceback (most recent call last):
  File "/home/qyh/anaconda3/envs/cuda1100/lib/python3.7/site-packages/jieba/__init__.py", line 154, in initialize
    _replace_file(fpath, cache_file)
PermissionError: [Errno 1] Operation not permitted: '/tmp/tmp5tskku9m' -> '/tmp/jieba.cache'
Loading model cost 0.832 seconds.
[2022-09-14 14:50:58] [DEBUG] [__init__.py:165] Loading model cost 0.832 seconds.
Prefix dict has been built successfully.
[2022-09-14 14:50:58] [DEBUG] [__init__.py:166] Prefix dict has been built successfully.
W0914 14:50:59.140672 3833755 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.4, Runtime API Version: 10.2
W0914 14:50:59.156149 3833755 gpu_resources.cc:91] device: 0, cuDNN Version: 8.2.
./local/inference.sh: line 17: 3833755 Segmentation fault      (core dumped) python3 ${BIN_DIR}/../inference.py --inference_dir=${train_output_path}/inference --am=fastspeech2_csmsc --voc=pwgan_csmsc --text=${BIN_DIR}/../sentences.txt --output_dir=${train_output_path}/pd_infer_out --phones_dict=dump/phone_id_map.txt
```

I ran ./local/synthesize_e2e.sh, then I ran `CUDA_VISIBLE_DEVICES='4' ./local/inference.sh ./train` and got the output above. I don't know how to get past the PermissionError.

yt605155624 commented 2 years ago

I haven't run into this problem; maybe try sudo, or check the permissions of local/inference.sh.
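One possible workaround for the jieba part of the log (an assumption, not verified on the reporter's machine): jieba writes its dictionary cache to Python's `tempfile.gettempdir()`, which is `/tmp` by default, so pointing `TMPDIR` at a user-writable directory before jieba is imported should avoid the `PermissionError` on `/tmp/jieba.cache`. A sketch:

```python
import os
import tempfile

# Redirect Python's temp dir (which jieba uses for jieba.cache) to a
# directory the current user can write to. The ~/.cache/jieba_tmp path
# here is an arbitrary choice, not a PaddleSpeech convention.
cache_dir = os.path.join(os.path.expanduser("~"), ".cache", "jieba_tmp")
os.makedirs(cache_dir, exist_ok=True)
os.environ["TMPDIR"] = cache_dir
tempfile.tempdir = None  # force tempfile to re-read TMPDIR on next lookup

# Any subsequent `import jieba` will now place jieba.cache under cache_dir.
```

jieba's README also mentions `jieba.dt.tmp_dir` and `jieba.dt.cache_file` for relocating the cache directly, which may be a cleaner fix if you can set them before the tokenizer initializes. Note this only addresses the PermissionError; the segmentation fault at the end of the log may be a separate problem.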