Closed ines-gpt closed 2 days ago
Sorry for the bug in the docs here: the `--checkpoint-path` argument should be "checkpoints/fish-speech-1.2/model.pth".
Thank you very much for your quick reply just now!
We are trying our best to fix the bugs. If you want to use the inference part, try the commands below (after activating your virtual environment):

cd /your_path_to_the_object/fish-speech
python tools/webui.py
Thank you very much! I found that the parameter `--checkpoint-path "checkpoints/model.pth"` should instead be the folder `checkpoints`; the source then loads `config.json` and `model.pth` from it. I executed the command again, and while loading `model.pth` the following error occurred:
File "/home/ysl/00_work/01_ines/01_rsd/02_fish_speech/tools/llama/generate.py", line 656, in main
model, decode_one_token = load_model(
File "/home/ysl/00_work/01_ines/01_rsd/02_fish_speech/tools/llama/generate.py", line 340, in load_model
model: Union[NaiveTransformer, DualARTransformer] = BaseTransformer.from_pretrained(
File "/home/ysl/00_work/01_ines/01_rsd/02_fish_speech/fish_speech/models/text2semantic/llama.py", line 341, in from_pretrained
tokenizer = AutoTokenizer.from_pretrained(str(path))
File "/home/ysl/miniconda3/envs/fish-speech/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 846, in from_pretrained
config = AutoConfig.from_pretrained(
File "/home/ysl/miniconda3/envs/fish-speech/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 984, in from_pretrained
raise ValueError(
ValueError: The checkpoint you are trying to load has model type
dual_arbut Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
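As a sanity check for the path issue above, a small sketch can verify the layout before launching generation: the loader expects `--checkpoint-path` to be a folder containing both `config.json` and `model.pth`. Note that `validate_checkpoint_dir` is a hypothetical helper for illustration, not part of fish-speech.

```python
from pathlib import Path

def validate_checkpoint_dir(path: str) -> list[str]:
    """Return a list of problems; an empty list means the layout looks right."""
    problems = []
    ckpt = Path(path)
    if not ckpt.is_dir():
        # Passing model.pth directly reproduces the failure described above.
        problems.append(f"{ckpt} is not a directory (pass the folder, not model.pth)")
        return problems
    for required in ("config.json", "model.pth"):
        if not (ckpt / required).is_file():
            problems.append(f"missing {required} in {ckpt}")
    return problems
```

Running this before `tools/llama/generate.py` makes the "folder, not file" mistake obvious up front instead of surfacing as a loader traceback.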
The bug with the path has been fixed in the PR; you can wait for the merge and then pull the main branch again :) Also remember to check the version of your Transformers. Try: `pip install --upgrade transformers`. If the problem still exists, we suggest you wait for the pull request to be merged.
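One way to act on the version advice above is a quick stdlib-only comparison. `version_at_least` is a hypothetical helper (it assumes plain numeric `major.minor.patch` strings, so pre-release suffixes beyond the third component are ignored), and any minimum version you compare against is illustrative, not a requirement stated by the project.

```python
def version_at_least(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically on the first three components."""
    def to_tuple(v: str) -> tuple[int, ...]:
        # "4.41.0.dev0" -> (4, 41, 0); assumes the first three parts are numeric.
        return tuple(int(part) for part in v.split(".")[:3])
    return to_tuple(installed) >= to_tuple(minimum)
```

In practice you would feed it `importlib.metadata.version("transformers")` and a minimum you believe supports the checkpoint, then upgrade if the check fails.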
From https://huggingface.co/fishaudio/fish-speech-1.2/tree/main I downloaded all the files, and now it works; the command executes successfully. Thank you!
The problem is resolved, so I am closing the issue.
When I execute the command

python tools/llama/generate.py \
    --text "The text to convert" \
    --prompt-text "Your reference text" \
    --prompt-tokens "fake.npy" \
    --checkpoint-path "checkpoints/text2semantic-sft-medium-v1.1-4k.pth" \
    --num-samples 2 \
    --compile

the following error happens:
checkpoints/text2semantic-sft-medium-v1.1-4k.pth
Traceback (most recent call last):
  File "/home/ysl/00_work/01_ines/01_rsd/02_fish_speech/tools/llama/generate.py", line 713, in <module>
    main()
  File "/home/ysl/miniconda3/envs/fish-speech/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/ysl/miniconda3/envs/fish-speech/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/ysl/miniconda3/envs/fish-speech/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/ysl/miniconda3/envs/fish-speech/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/ysl/00_work/01_ines/01_rsd/02_fish_speech/tools/llama/generate.py", line 660, in main
    model, decode_one_token = load_model(
  File "/home/ysl/00_work/01_ines/01_rsd/02_fish_speech/tools/llama/generate.py", line 344, in load_model
    model: Union[NaiveTransformer, DualARTransformer] = BaseTransformer.from_pretrained(
  File "/home/ysl/00_work/01_ines/01_rsd/02_fish_speech/fish_speech/models/text2semantic/llama.py", line 325, in from_pretrained
    config = BaseModelArgs.from_pretrained(path)
  File "/home/ysl/00_work/01_ines/01_rsd/02_fish_speech/fish_speech/models/text2semantic/llama.py", line 77, in from_pretrained
    data = json.load(f)
  File "/home/ysl/miniconda3/envs/fish-speech/lib/python3.10/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/home/ysl/miniconda3/envs/fish-speech/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 128: invalid start byte
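The traceback shows `BaseModelArgs.from_pretrained` calling `json.load` on the given path, which appears to read the binary `.pth` checkpoint itself as a JSON config and dies on the first non-UTF-8 byte. A minimal sketch of a probe that distinguishes a UTF-8 JSON config from a binary blob before loading (`looks_like_json` is a hypothetical helper, not part of fish-speech):

```python
import json

def looks_like_json(path: str) -> bool:
    """Heuristic: True if the file decodes as UTF-8 and parses as JSON."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            json.load(f)
        return True
    except (UnicodeDecodeError, json.JSONDecodeError):
        # Binary data (e.g. a torch .pth file) fails to decode or parse,
        # which is exactly the UnicodeDecodeError seen in the traceback.
        return False
```

Running this on the value passed to `--checkpoint-path` would have flagged the `.pth` file immediately instead of failing deep inside `json.load`.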