Closed: Maarten-buelens closed this issue 1 year ago.
Any solution for this?
No, not yet.
Same problem when using the docker images. CPU and GPU.
The same error seems to happen when using the command-line utility (tts):
tts --text "hello world." --out_path audio/hello.wav
Same error
Same error. It stopped working overnight.
Here is my workaround for the Docker usage. It seems the proxy URLs in .models.json are broken.
If you've attempted to download any of the broken model links, you probably need to clear them from /root/.local/share/tts/.
Replace the proxy URLs with GitHub release links:
cp /root/TTS/.models.json /root/TTS/.models.json.bak
sed -i 's|https://coqui.gateway.scarf.sh|https://github.com/coqui-ai/TTS/releases/download|g' /root/TTS/.models.json
tts-server --model_name tts_models/en/vctk/vits
> Downloading model to /root/.local/share/tts/tts_models--en--vctk--vits
> Model's license - apache 2.0
> Check https://choosealicense.com/licenses/apache-2.0/ for more info.
> Using model: vits
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log10
| > min_level_db:0
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:None
| > fft_size:1024
| > power:None
| > preemphasis:0.0
| > griffin_lim_iters:None
| > signal_norm:None
| > symmetric_norm:None
| > mel_fmin:0
| > mel_fmax:None
| > pitch_fmin:None
| > pitch_fmax:None
| > spec_gain:20.0
| > stft_pad_mode:reflect
| > max_norm:1.0
| > clip_norm:True
| > do_trim_silence:False
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:10
| > hop_length:256
| > win_length:1024
> initialization of speaker-embedding layers.
* Serving Flask app 'TTS.server.server'
* Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (::)
* Running on http://[::1]:5002
INFO:werkzeug:Press CTRL+C to quit
Curious about the need to retarget these model downloads away from GitHub.
Just following so I know when this is fixed.
Same here.
I am also facing the same issue since yesterday; please provide a fix ASAP.
Same for me since yesterday. Following to see when it's fixed...
I just saw his tweet saying "GitHub changed the max artifact size in releases and made it 25MB". This could be why it's not reachable anymore.
In the TTS/.models.json file in the repo the model links work fine (you can download them), but even if I download them manually I can't get it to work. Has anyone tried this?
Since the link for the model download is back, I got it to work again by simply deleting the directory containing the corrupted zip files. In my case: rm -rf ~/.local/share/tts/tts_models--en--ljspeech--tacotron2-DDC
> In the TTS/.models.json file in the repo the model links work fine (you can download them), but even if I download them manually I can't get it to work. Has anyone tried this?
If you manually download the zip, you need to unpack it into the ~/.local/share/tts/{model name} directory, or you could retry the automatic download.
Sorry, guys, we had a problem with our proxy. Now all is good.
Error:
raise BadZipFile("File is not a zip file") zipfile.BadZipFile: File is not a zip file
I think the following is the root cause of this problem:
It is indeed a network issue that caused the download of resources to fail or be incomplete.
Reason:
This error is likely caused by the absence of the averaged_perceptron_tagger package in NLTK, which is used for part-of-speech tagging; it contains a tagger based on the averaged perceptron algorithm. If your code uses this tagger but the corresponding data package hasn't been downloaded beforehand, you will get an error indicating that the averaged_perceptron_tagger.zip file is missing. It is also possible that you are missing the cmudict (CMU pronouncing dictionary) data package.
Normally, when you run the program for the first time, NLTK automatically downloads the relevant data packages it needs. In debug mode, you should see the following information:
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /root/nltk_data...
[nltk_data] Unzipping taggers/averaged_perceptron_tagger.zip.
[nltk_data] Downloading package cmudict to /root/nltk_data...
[nltk_data] Unzipping corpora/cmudict.zip.
However, due to reasons such as network issues, the automatic download might fail, resulting in missing files and causing the loading error.
Solution: Redownload the missing data package files.
Method 1:
Create a download.py file and write the following code in it:
import nltk
print(nltk.data.path)
nltk.download('averaged_perceptron_tagger')
nltk.download('cmudict')
Save and run the file:
python download.py
This will display the file index location and automatically download the missing averaged_perceptron_tagger.zip and cmudict.zip files to a subdirectory under the /root/nltk_data directory. After the download is complete, check that you have an nltk_data folder in the root directory and extract the contents of the downloaded zip files there.
Method 2:
If the above code still fails to download the data packages, you can manually search for and download the zip files by opening the following address:
https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
The download links for the averaged_perceptron_tagger.zip and cmudict.zip data packages are:
https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/packages/taggers/averaged_perceptron_tagger.zip
https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/packages/corpora/cmudict.zip
Then, upload the downloaded zip files to the file index location that was printed when running python download.py, such as /root/nltk_data or /root/miniconda3/envs/EmotiVoice/nltk_data. If the directory doesn't exist, create one and extract the zip files there.
After extraction, the directory structure of nltk_data should look like this:
nltk_data
├── corpora
│   ├── cmudict
│   │   ├── README
│   │   └── cmudict
│   └── cmudict.zip
└── taggers
    ├── averaged_perceptron_tagger
    │   └── averaged_perceptron_tagger.pickle
    └── averaged_perceptron_tagger.zip
Reference source: "Runtime error: raise BadZipFile("File is not a zip file")" (Chinese-language post).
> Sorry, guys, we had a problem with our proxy. Now all is good.
Still same trouble. Did you push the branch to main?
I had to disable my pihole to get it to work.
No, simply clicking on the link made it work for me.
> I had to disable my pihole to get it to work.
I am so glad I stumbled across this comment. I forgot I even had that thing running.
Describe the bug
When I run the example, the model is unable to download.
When I try to manually download the model from
I get redirected to
which says "Repository not found".
To Reproduce
Run the basic example for python
code:
output:
Environment
Additional context
No response