suno-ai / bark

🔊 Text-Prompted Generative Audio Model

Cannot load models using preload_models (OSError: Read-only file system) #317

Open zmactep opened 1 year ago

zmactep commented 1 year ago

TL;DR: I get an OSError from tempfile when I try to load the models.

I have a fresh conda environment on my M1 Max MacBook with the latest PyTorch 2, huggingface_hub, and other packages installed. Bark is installed with pip install git+https://github.com/suno-ai/bark.git.

I am trying to download the models using preload_models() from the example code, but I get the following error:

In [1]: from bark import SAMPLE_RATE, generate_audio, preload_models
   ...: from scipy.io.wavfile import write as write_wav
   ...: from IPython.display import Audio

In [2]: # download and load all models
   ...: preload_models()
No GPU being used. Careful, inference might be very slow!
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
Cell In[2], line 2
      1 # download and load all models
----> 2 preload_models()

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/site-packages/bark/generation.py:318, in preload_models(text_use_gpu, text_use_small, coarse_use_gpu, coarse_use_small, fine_use_gpu, fine_use_small, codec_use_gpu, force_reload)
    314 if _grab_best_device() == "cpu" and (
    315     text_use_gpu or coarse_use_gpu or fine_use_gpu or codec_use_gpu
    316 ):
    317     logger.warning("No GPU being used. Careful, inference might be very slow!")
--> 318 _ = load_model(
    319     model_type="text", use_gpu=text_use_gpu, use_small=text_use_small, force_reload=force_reload
    320 )
    321 _ = load_model(
    322     model_type="coarse",
    323     use_gpu=coarse_use_gpu,
    324     use_small=coarse_use_small,
    325     force_reload=force_reload,
    326 )
    327 _ = load_model(
    328     model_type="fine", use_gpu=fine_use_gpu, use_small=fine_use_small, force_reload=force_reload
    329 )

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/site-packages/bark/generation.py:275, in load_model(use_gpu, use_small, force_reload, model_type)
    273     ckpt_path = _get_ckpt_path(model_type, use_small=use_small)
    274     clean_models(model_key=model_key)
--> 275     model = _load_model_f(ckpt_path, device)
    276     models[model_key] = model
    277 if model_type == "text":

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/site-packages/bark/generation.py:211, in _load_model(ckpt_path, device, use_small, model_type)
    209 if not os.path.exists(ckpt_path):
    210     logger.info(f"{model_type} model not found, downloading into `{CACHE_DIR}`.")
--> 211     _download(model_info["repo_id"], model_info["file_name"])
    212 checkpoint = torch.load(ckpt_path, map_location=device)
    213 # this is a hack

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/site-packages/bark/generation.py:151, in _download(from_hf_path, file_name)
    149 def _download(from_hf_path, file_name):
    150     os.makedirs(CACHE_DIR, exist_ok=True)
--> 151     hf_hub_download(repo_id=from_hf_path, filename=file_name, local_dir=CACHE_DIR)

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
    117 if check_use_auth_token:
    118     kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 120 return fn(*args, **kwargs)

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/site-packages/huggingface_hub/file_download.py:1318, in hf_hub_download(repo_id, filename, subfolder, repo_type, revision, library_name, library_version, cache_dir, local_dir, local_dir_use_symlinks, user_agent, force_download, force_filename, proxies, etag_timeout, resume_download, token, local_files_only, legacy_cache_layout)
   1315 if os.path.exists(blob_path) and not force_download:
   1316     # we have the blob already, but not the pointer
   1317     if local_dir is not None:  # to local dir
-> 1318         return _to_local_dir(blob_path, local_dir, relative_filename, use_symlinks=local_dir_use_symlinks)
   1319     else:  # or in snapshot cache
   1320         _create_symlink(blob_path, pointer_path, new_blob=False)

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/site-packages/huggingface_hub/file_download.py:1628, in _to_local_dir(path, local_dir, relative_filename, use_symlinks)
   1625     use_symlinks = os.stat(real_blob_path).st_size > constants.HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD
   1627 if use_symlinks:
-> 1628     _create_symlink(real_blob_path, local_dir_filepath, new_blob=False)
   1629 else:
   1630     shutil.copyfile(real_blob_path, local_dir_filepath)

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/site-packages/huggingface_hub/file_download.py:897, in _create_symlink(src, dst, new_blob)
    895 try:
    896     commonpath = os.path.commonpath([abs_src, abs_dst])
--> 897     _support_symlinks = are_symlinks_supported(os.path.dirname(commonpath))
    898 except ValueError:
    899     # Raised if src and dst are not on the same volume. Symlinks will still work on Linux/Macos.
    900     # See https://docs.python.org/3/library/os.path.html#os.path.commonpath
    901     _support_symlinks = os.name != "nt"

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/site-packages/huggingface_hub/file_download.py:101, in are_symlinks_supported(cache_dir)
     98 _are_symlinks_supported_in_dir[cache_dir] = True
    100 os.makedirs(cache_dir, exist_ok=True)
--> 101 with SoftTemporaryDirectory(dir=cache_dir) as tmpdir:
    102     src_path = Path(tmpdir) / "dummy_file_src"
    103     src_path.touch()

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/contextlib.py:135, in _GeneratorContextManager.__enter__(self)
    133 del self.args, self.kwds, self.func
    134 try:
--> 135     return next(self.gen)
    136 except StopIteration:
    137     raise RuntimeError("generator didn't yield") from None

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/site-packages/huggingface_hub/utils/_fixes.py:54, in SoftTemporaryDirectory(suffix, prefix, dir, **kwargs)
     37 @contextlib.contextmanager
     38 def SoftTemporaryDirectory(
     39     suffix: Optional[str] = None,
   (...)
     42     **kwargs,
     43 ) -> Generator[str, None, None]:
     44     """
     45     Context manager to create a temporary directory and safely delete it.
     46 
   (...)
     52     See https://www.scivision.dev/python-tempfile-permission-error-windows/.
     53     """
---> 54     tmpdir = tempfile.TemporaryDirectory(prefix=prefix, suffix=suffix, dir=dir, **kwargs)
     55     yield tmpdir.name
     57     try:
     58         # First once with normal cleanup

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/tempfile.py:819, in TemporaryDirectory.__init__(self, suffix, prefix, dir, ignore_cleanup_errors)
    817 def __init__(self, suffix=None, prefix=None, dir=None,
    818              ignore_cleanup_errors=False):
--> 819     self.name = mkdtemp(suffix, prefix, dir)
    820     self._ignore_cleanup_errors = ignore_cleanup_errors
    821     self._finalizer = _weakref.finalize(
    822         self, self._cleanup, self.name,
    823         warn_message="Implicitly cleaning up {!r}".format(self),
    824         ignore_errors=self._ignore_cleanup_errors)

File /opt/homebrew/Caskroom/miniconda/base/envs/voice/lib/python3.10/tempfile.py:368, in mkdtemp(suffix, prefix, dir)
    366 _sys.audit("tempfile.mkdtemp", file)
    367 try:
--> 368     _os.mkdir(file, 0o700)
    369 except FileExistsError:
    370     continue    # try again

OSError: [Errno 30] Read-only file system: '/tmp_sl5pio3'
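
Reading the bottom of the traceback: the OSError comes from huggingface_hub's symlink-support probe. are_symlinks_supported creates a throwaway temporary directory inside the directory it is checking, and here that directory has apparently resolved to the filesystem root ('/tmp_sl5pio3' is just '/' plus tempfile's generated name). On recent macOS the root volume is mounted read-only, so the mkdir fails. A minimal sketch that reproduces only the failing tempfile call, not bark itself:

import tempfile

# assumes macOS, where "/" is a read-only volume; this is the same
# TemporaryDirectory(dir=...) call the probe makes, pointed at the root
tempfile.TemporaryDirectory(dir="/")
# OSError: [Errno 30] Read-only file system: '/tmp...'
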
tongbaojia commented 1 year ago

You can try to download the .pt files from Hugging Face directly (https://huggingface.co/suno/bark/tree/main) and put them into the models folder.
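
For reference, a sketch of that manual download using hf_hub_download itself (untested; the file names text_2.pt, coarse_2.pt, fine_2.pt are assumed from that repo listing, and local_dir_use_symlinks=False makes hf_hub_download copy the files instead of running the symlink probe that fails above):

import os
from huggingface_hub import hf_hub_download

# bark's default cache dir, see the path discussion below
cache_dir = os.path.join(os.path.expanduser("~"), ".cache", "suno", "bark_v0")
os.makedirs(cache_dir, exist_ok=True)

# large-model file names, assumed from https://huggingface.co/suno/bark/tree/main
for file_name in ["text_2.pt", "coarse_2.pt", "fine_2.pt"]:
    hf_hub_download(
        repo_id="suno/bark",
        filename=file_name,
        local_dir=cache_dir,
        local_dir_use_symlinks=False,  # copy instead of symlink: skips the probe
    )
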

siva010928 commented 1 year ago

@tongbaojia can you explain more clearly?

tongbaojia commented 1 year ago

You can download them by following the above link.

The models should be put at ~/.cache/suno/bark_v0 (at least for me). You can see the model path discussion in #197.

siva010928 commented 1 year ago

default_cache_dir = os.path.join(os.path.expanduser("~"), ".cache") resolves like this: by default, os.path.expanduser("~") expands to the current user's home directory. So the models are not stored in VRAM (GPU)? They are stored on local disk, at ~/.cache/suno/bark_v0.
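
For what it's worth, my reading of bark/generation.py is roughly the following (the XDG_CACHE_HOME fallback is my interpretation of the source and may differ between versions):

import os

default_cache_dir = os.path.join(os.path.expanduser("~"), ".cache")
# bark appends "suno/bark_v0" to the cache root
CACHE_DIR = os.path.join(os.getenv("XDG_CACHE_HOME", default_cache_dir), "suno", "bark_v0")
print(CACHE_DIR)  # e.g. /Users/<you>/.cache/suno/bark_v0
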

tongbaojia commented 1 year ago

No, the models are always saved locally -- the code moves them onto the GPU with .to(device) when it loads them.
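
In other words, the pattern visible in the traceback (generation.py line 212) is roughly:

import os
import torch

# the checkpoint sits on local disk; map_location decides where the tensors
# end up when it is loaded ("cpu" here, or a CUDA device if one is available)
ckpt_path = os.path.join(os.path.expanduser("~"), ".cache", "suno", "bark_v0", "text_2.pt")
device = "cuda" if torch.cuda.is_available() else "cpu"
checkpoint = torch.load(ckpt_path, map_location=device)
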

siva010928 commented 1 year ago

Thanks @tongbaojia! I was able to create API integrations and a Docker API for this model; see my repo.