Closed MedalCollector closed 7 months ago
Are the corresponding models present under pretrained_models? The download links are right there in the README.
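For context on why a missing folder produces an HFValidationError rather than a file-not-found error: transformers only treats the argument as a local folder when that directory actually exists; any other string is validated as a Hugging Face Hub repo id, and a three-component path like the one in these tracebacks fails that validation. A minimal pre-flight check, as a sketch (the path is the one from this thread; the helper name is mine):

```python
import os

# Path from this thread, relative to the GPT-SoVITS working directory.
BERT_PATH = "GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large"

def resolve_model_source(path: str) -> str:
    """Mirror the dispatch transformers performs: an existing directory
    is loaded locally; any other string is validated as a Hub repo id."""
    return "local" if os.path.isdir(path) else "hub"

# If this prints "hub", the pretrained models were never downloaded, and
# from_pretrained() will fail with the repo-id validation error.
print(resolve_model_source(BERT_PATH))
```

Running this from the repo root is a quick way to confirm whether the pretrained models were actually placed where the config expects them.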
Also, nltk_data failed to download because the connection to GitHub could not be established.
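The nltk_data failure below ([Errno 11004] getaddrinfo failed) is a plain DNS/network error, so one offline workaround is to copy an nltk_data folder from a machine that can reach the download server and point NLTK at it: NLTK honors the NLTK_DATA environment variable as an extra search path. A sketch, assuming the data was copied by hand to D:\nltk_data:

```python
import os

# Assumption: averaged_perceptron_tagger was copied manually to
# D:\nltk_data\taggers\ from a machine with working network access.
# NLTK reads NLTK_DATA when the nltk package is imported, so set the
# variable before `import nltk` (or set it system-wide in Windows).
os.environ["NLTK_DATA"] = r"D:\nltk_data"
```

Setting the variable in the shell before launching the script works equally well and avoids editing any code.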
I have the same problem.
Traceback (most recent call last):
  File "C:\Python312\Lib\site-packages\transformers\utils\hub.py", line 398, in cached_file
    resolved_file = hf_hub_download(
                    ^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\huggingface_hub\utils\_validators.py", line 111, in _inner_fn
    validate_repo_id(arg_value)
  File "C:\Python312\Lib\site-packages\huggingface_hub\utils\_validators.py", line 159, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'. Use `repo_type` argument if needed.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\Users\user\Downloads\GPT-SoVITS-main\GPT-SoVITS-main\GPT_SoVITS\sovits.py", line 45, in <module>
    tokenizer = AutoTokenizer.from_pretrained(bert_path, token)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 779, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 612, in get_tokenizer_config
    resolved_config_file = cached_file(
                           ^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\transformers\utils\hub.py", line 462, in cached_file
    raise EnvironmentError(
OSError: Incorrect path_or_model_id: 'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
I have the same problem.
Did you ever solve this? I hit the same problem with exactly the same error. I even downloaded the one-click package and copied its files over, but that didn't work either.
On Windows, the model-training stage reported the following error; has anyone run into this?

"D:\program\anaconda\FromC\envs\GPTSoVITS\python.exe" GPT_SoVITS/prepare_datasets/1-get-text.py
[nltk_data] Error loading averaged_perceptron_tagger: <urlopen error
[nltk_data]   [Errno 11004] getaddrinfo failed>
Traceback (most recent call last):
  File "D:\program\anaconda\FromC\envs\GPTSoVITS\lib\site-packages\transformers\utils\hub.py", line 385, in cached_file
    resolved_file = hf_hub_download(
  File "D:\program\anaconda\FromC\envs\GPTSoVITS\lib\site-packages\huggingface_hub\utils\_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "D:\program\anaconda\FromC\envs\GPTSoVITS\lib\site-packages\huggingface_hub\utils\_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'. Use `repo_type` argument if needed.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\program\python\GPT-SoVITS\GPT_SoVITS\prepare_datasets\1-get-text.py", line 56, in <module>
    tokenizer = AutoTokenizer.from_pretrained(bert_pretrained_dir)
  File "D:\program\anaconda\FromC\envs\GPTSoVITS\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 758, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "D:\program\anaconda\FromC\envs\GPTSoVITS\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 590, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "D:\program\anaconda\FromC\envs\GPTSoVITS\lib\site-packages\transformers\utils\hub.py", line 450, in cached_file
    raise EnvironmentError(
OSError: Incorrect path_or_model_id: 'GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
Traceback (most recent call last):
  File "D:\program\python\GPT-SoVITS\webui.py", line 555, in open1abc
    with open(txt_path, "r", encoding="utf8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'logs/xxx/2-name2text-0.txt'
ERROR: The process "19472" not found.
ERROR: The process "7624" not found.
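The FileNotFoundError is most likely downstream fallout rather than a separate bug: webui.py reads the per-worker output files that 1-get-text.py should have written, and that step already died on the model-path error above. A guard along these lines would surface the real cause earlier; the file layout is taken from the traceback, and the helper function is hypothetical:

```python
from pathlib import Path

def stage1_outputs_present(log_dir: str, n_workers: int) -> bool:
    """Return True only if every 2-name2text-<i>.txt that 1-get-text.py
    should have written exists, so a merge step can fail fast with a
    clear message instead of a bare FileNotFoundError."""
    return all(
        (Path(log_dir) / f"2-name2text-{i}.txt").exists()
        for i in range(n_workers)
    )
```

If this returns False after the dataset-preparation stage, the fix is to resolve the earlier pretrained-model path error, not the missing text file itself.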