I've had this error on macOS and fixed it in this PR:
```
Downloading: "https:/dl.fbaipublicfiles.com/encodec/v0/encodec_24khz-d7cc33bc.th" to /Users/vietanhdev/.cache/torch/hub/checkpoints/encodec_24khz-d7cc33bc.th
Traceback (most recent call last):
  File "download_weights.py", line 41, in <module>
    state_dict = torch.hub.load_state_dict_from_url(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/bark/lib/python3.8/site-packages/torch/hub.py", line 746, in load_state_dict_from_url
    download_url_to_file(url, cached_file, hash_prefix, progress=progress)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/bark/lib/python3.8/site-packages/torch/hub.py", line 611, in download_url_to_file
    u = urlopen(req)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/bark/lib/python3.8/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/bark/lib/python3.8/urllib/request.py", line 522, in open
    req = meth(req)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/bark/lib/python3.8/urllib/request.py", line 1278, in do_request_
    raise URLError('no host given')
urllib.error.URLError: <urlopen error no host given>
```
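The root cause is visible in the first line of the log: the checkpoint URL begins with `https:/` (a single slash), so `urllib` parses it with an empty host and raises "no host given". A minimal sketch of why this happens, using only the standard library (the URLs below are copied from the log above):

```python
from urllib.parse import urlparse

# Malformed URL from the log: "https:/" with a single slash means the
# text after the colon is treated as a path, leaving the host empty.
broken = "https:/dl.fbaipublicfiles.com/encodec/v0/encodec_24khz-d7cc33bc.th"
fixed = "https://dl.fbaipublicfiles.com/encodec/v0/encodec_24khz-d7cc33bc.th"

print(repr(urlparse(broken).netloc))  # '' -> urllib raises "no host given"
print(repr(urlparse(fixed).netloc))   # 'dl.fbaipublicfiles.com'
```

So the fix in the PR amounts to restoring the double slash in the download URL.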
For running the model conversion, I'm still missing vocab.txt and need to comment out the following part to convert the model successfully:
Hi @vietanhdev ! Thanks for the PR! FYI, I've just pushed vocab.txt in ggml_weights. This is the vocab file from the bert-multilingual-cased tokenizer on HuggingFace.