gedageda opened this issue 1 week ago
The second one went away after enabling the Developer Mode option, but how should I handle the first one? Most importantly, why does it only run on CPU on my machine? Doesn't my CUDA 12 setup work?
It's just a warning, not an error; you can ignore it.
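As for the CPU-only question: a CUDA 12 driver by itself is not enough; the installed PyTorch wheel also has to be a CUDA build (a `+cu12x` wheel rather than `+cpu`). A quick diagnostic, offered as a sketch of my own rather than anything from this repo:

```python
import torch

# A version string like "2.x.y+cpu" means the wheel has no CUDA support no
# matter which driver is installed; reinstall a cu12x build of torch in that case.
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```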
1. The following error appeared when I deployed it myself:

   ```
   [Errno 2] No such file or directory: 'data/Emilia_ZH_EN_pinyin/vocab.txt'
   Traceback (most recent call last):
     File "D:\TTS-noModels\f5-tts-api\webui.py", line 193, in generate_audio
       loaded_models[model_choice] = load_model(
     File "D:\TTS-noModels\f5-tts-api\webui.py", line 83, in load_model
       vocab_char_map, vocab_size = get_tokenizer("Emilia_ZH_EN", "pinyin")
     File "D:\TTS-noModels\f5-tts-api\model\utils.py", line 138, in get_tokenizer
       with open(f"data/{dataset_name}_{tokenizer}/vocab.txt", "r", encoding="utf-8") as f:
   FileNotFoundError: [Errno 2] No such file or directory: 'data/Emilia_ZH_EN_pinyin/vocab.txt'
   ```

   Fix: copy the data/Emilia_ZH_EN_pinyin directory from the official open-source repository into the data folder; that folder is empty in a fresh deployment.

2. GPU usage is covered in the existing issues, so it is not repeated here.

3. When using the webui, the generated audio has noise at the very beginning; no fix found yet.

4. When using the webui, the following error appears; not handled yet (a possible workaround is sketched after this list):

   ```
   ERROR:__main__:Error generating audio: Argument must be an image or collection in this Axes
   Traceback (most recent call last):
     File "D:\TTS-noModels\f5-tts-api\webui.py", line 213, in generate_audio
       audio, _ = infer(audio_path, ref_text, prompt, model, False)
     File "D:\TTS-noModels\f5-tts-api\webui.py", line 461, in infer
       return infer_batch((audio, sr) if ref_audio else (None, None), ref_text, gen_text_batches, model, remove_silence)
     File "D:\TTS-noModels\f5-tts-api\webui.py", line 405, in infer_batch
       save_spectrogram(combined_spectrogram, spectrogram_path)
     File "D:\TTS-noModels\f5-tts-api\model\utils.py", line 198, in save_spectrogram
       plt.imshow(spectrogram, origin='lower', aspect='auto')
     File "D:\ANAconda\envs\f5-tts-api\lib\site-packages\matplotlib\pyplot.py", line 3581, in imshow
       sci(ret)
     File "D:\ANAconda\envs\f5-tts-api\lib\site-packages\matplotlib\pyplot.py", line 4332, in sci
       gca()._sci(im)
     File "D:\ANAconda\envs\f5-tts-api\lib\site-packages\matplotlib\axes\_base.py", line 2221, in _sci
       raise ValueError("Argument must be an image or collection in this Axes")
   ValueError: Argument must be an image or collection in this Axes
   ```
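For item 4, one workaround worth trying (my own guess, not something confirmed in this thread) is to build the spectrogram figure with Matplotlib's object-oriented API instead of the `pyplot` state machine, since `plt.imshow` depends on the global "current axes", which is fragile when the webui renders plots outside the main thread. A minimal sketch of such a `save_spectrogram` replacement:

```python
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure

def save_spectrogram(spectrogram, path):
    """Render a spectrogram image without touching pyplot's global state."""
    fig = Figure(figsize=(12, 4))
    FigureCanvasAgg(fig)  # attach an off-screen Agg canvas so savefig can render
    ax = fig.add_subplot(111)
    im = ax.imshow(spectrogram, origin="lower", aspect="auto")
    fig.colorbar(im, ax=ax)
    fig.savefig(path, bbox_inches="tight")
```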
I tested 10 videos for subtitle generation, translation and dubbing; after the following warnings F5 stopped working:

```
WARNING:waitress.queue:Task queue depth is 1
WARNING:waitress.queue:Task queue depth is 2
WARNING:waitress.queue:Task queue depth is 3
WARNING:waitress.queue:Task queue depth is 4
WARNING:waitress.queue:Task queue depth is 5
WARNING:waitress.queue:Task queue depth is 6
WARNING:waitress.queue:Task queue depth is 7
WARNING:waitress.queue:Task queue depth is 8
```
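Those `waitress.queue` warnings mean requests are arriving faster than the worker threads finish them, so each new dubbing job queues up behind the previous ones. If the API server is started with `waitress.serve` (an assumption on my part; the real startup code may differ), the `threads` argument sets that concurrency:

```python
from flask import Flask
from waitress import serve

app = Flask(__name__)  # stand-in for the real f5-tts-api application object

@app.route("/ping")
def ping():
    return "ok"

if __name__ == "__main__":
    # waitress defaults to 4 worker threads; requests beyond that wait in the
    # task queue and produce the "Task queue depth is N" warnings shown above.
    serve(app, host="127.0.0.1", port=5010, threads=8)  # port is illustrative
```

Note that raising `threads` only helps if the machine can actually run several syntheses in parallel; with a single GPU (or CPU-only inference) the jobs still serialize, so some queue growth is expected when ten videos are submitted at once.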
1. FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
2. `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in D:\F5-TTS\modelscache\hub\models--Systran--faster-whisper-large-v3. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations. To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
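Both messages are only warnings. If you would rather silence them than ignore them, the `torch.load` one can be addressed where the checkpoint is loaded (this works only for plain tensor/state-dict checkpoints), and the symlink one can be suppressed with the environment variable it names. A sketch with a purely illustrative checkpoint path:

```python
import os

# Suppress the Windows symlink warning from huggingface_hub; caching still
# works, it just duplicates files on disk instead of symlinking them.
os.environ["HF_HUB_DISABLE_SYMLINKS_WARNING"] = "1"

import torch

# weights_only=True refuses to unpickle arbitrary objects, which is what the
# FutureWarning recommends; it fails if the checkpoint holds anything beyond
# tensors and standard containers.
checkpoint = torch.load("path/to/model.pt", map_location="cpu", weights_only=True)
```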