BecarefulW opened this issue 4 days ago
You could probably do something similar to my modifications here: https://github.com/JarodMica/F5-TTS/commit/760a0ee6662ab0f8c94b7e44ba839c4681fd72ee#diff-e04b7ac8329048ac5038a06eb8d8db889aebb858e399eaa1eb3690739064d1afR66
I haven't had time to formalize a PR to the main repo yet, so for now you could just pass in your local path and patch it here: https://github.com/SWivid/F5-TTS/blob/8f65f9f3e4d47611f2c0628cb3732b1b1ba3fd0b/src/f5_tts/infer/utils_infer.py#L218
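Roughly what I mean, as a sketch rather than the exact API (the checkpoint file name and the loader call are assumptions, check `utils_infer.py` for the real signature): fetch the checkpoint into the project folder once, then hand that local path to the loader instead of letting it resolve the `hf://` URL.

```python
# Sketch only: download the main F5-TTS checkpoint into the project directory
# and keep the local path, instead of letting the loader pull it into the
# global HF cache. The file name below is an assumption -- check SWivid/F5-TTS.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="SWivid/F5-TTS",
    filename="F5TTS_Base/model_1200000.safetensors",  # assumed checkpoint name
    local_dir="./models",  # file ends up under ./models inside the project
)

# then pass ckpt_path to whatever loader you call in utils_infer.py,
# e.g. (hypothetical) load_model(..., ckpt_path) instead of the hf:// default
print(ckpt_path)
```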
I tried setting os.environ["TRANSFORMERS_CACHE"] = ".model" in huggingface_hub's constants.py, but the main model is still downloaded to the original cache directory. Since there are four models, modifying each of them by hand seems error-prone. Is there a way to set the cache location to the project directory, or to specify a folder that the project reads the models from?
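For reference, `TRANSFORMERS_CACHE` only affects `transformers` itself, and editing the installed `constants.py` is easy to get wrong. Assuming the downloads go through `huggingface_hub`, the cache location is taken from the `HF_HOME` / `HF_HUB_CACHE` environment variables, and they have to be set before `huggingface_hub` is imported, because its constants are resolved at import time. A minimal sketch, using the `.model` folder from the example above:

```python
import os

# Set the cache location before anything imports huggingface_hub / transformers;
# their cache constants are read once at import time.
os.environ["HF_HOME"] = os.path.abspath(".model")           # whole HF cache under the project
os.environ["HF_HUB_CACHE"] = os.path.abspath(".model/hub")  # or only the hub part

from huggingface_hub import hf_hub_download  # imported after the env vars are set

# from here on, hub downloads (main model, vocoder, vocab, ...) land under .model/
```

If any of the four models is fetched through a different downloader rather than `huggingface_hub`, it may still need its own setting; I haven't checked that part.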
Question details
The automatically downloaded models end up in the cache folder, which is inconvenient for file management. How can I change where the models are downloaded to and loaded from?