Is your feature request related to a problem? Please describe.
A major drawback of single-file models is their inefficient use of disk storage: a user who has downloaded several models in the single-file format is likely storing many redundant copies of individual model components that are reused across models.
It's common for merges and fine-tunes to modify only the UNet, for example, leaving the user with many redundant copies of identical VAEs and text encoders that eat up a non-trivial amount of disk space.
However, even when converting single-file models to the diffusers-multifolder format using the scripts provided in this repository, each model's components (e.g., UNet, VAE, text encoder) are currently stored separately, so storage remains redundant when multiple models share identical components.
Describe the solution you'd like.
I propose a feature that facilitates the conversion of downloaded single-file models to the diffusers-multifolder layout in a storage-efficient manner. The core idea is to identify and eliminate duplicate model components across multiple models by:
1. Converting each model to the diffusers-multifolder format.
2. Computing a cryptographic hash for the weight file of each model component.
3. Storing each unique component, named by its hash, in a centralized directory.
4. Linking from each model's directory to these centralized components instead of storing duplicates, OR recording the component's name/hash/location in the config.json (which may be preferable for Windows users, since symlinks there require enabling developer mode).
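Steps 2–4 could be sketched roughly as follows; the file names, store layout, and symlink strategy here are all illustrative assumptions, not a final design:

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large weight files fit in constant memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def deduplicate_component(weights: Path, store: Path) -> Path:
    """Move a component's weight file into a central store named by its hash,
    then replace the original with a symlink to the stored copy."""
    store.mkdir(parents=True, exist_ok=True)
    target = store / f"{file_sha256(weights)}{weights.suffix}"
    if not target.exists():
        weights.replace(target)  # first copy becomes the canonical blob
    else:
        weights.unlink()         # duplicate: drop it and reuse the stored blob
    weights.symlink_to(target)
    return target
```

Under the config.json variant, the last two lines would instead record the hash and store path in the model's config rather than creating a symlink.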
This could be supported by providing additional operations such as:
- Performing a dry run to estimate potential disk-space savings.
- Converting models one at a time, deleting each original file after its conversion (useful when disk space is already tight), optionally pausing before the deletion so the converted model can be validated.
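The dry run might look something like this, assuming converted models live in per-model directories with `.safetensors` weight files; the function name and glob pattern are hypothetical:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def estimate_savings(model_dirs: list[Path], pattern: str = "*.safetensors") -> int:
    """Group weight files by content hash across the given model directories
    and return the bytes reclaimable by keeping one copy of each unique blob."""
    sizes_by_hash: dict[str, list[int]] = defaultdict(list)
    for model in model_dirs:
        for weights in model.rglob(pattern):
            # reading whole files keeps the sketch short; a real tool would stream
            digest = hashlib.sha256(weights.read_bytes()).hexdigest()
            sizes_by_hash[digest].append(weights.stat().st_size)
    # every copy beyond the first of each identical blob could be reclaimed
    return sum(sum(sizes[1:]) for sizes in sizes_by_hash.values())
```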
Describe alternatives you've considered.
If I understand correctly (please correct me if I'm wrong), the huggingface_hub caching system currently performs some de-duplication, but:
- The conversion scripts in this repository don't add models to this cache.
- The caching system de-duplicates blobs within a single model repository, but not across repositories.
Additional context.
I'd like to work on implementing this feature, and I'm proposing it here first to ensure it fits within the scope of this project, and to refine the proposal further if necessary.
In my opinion, this feature would drive support for and adoption of the diffusers-multifolder layout across the ecosystem.
Thank you for your consideration and feedback!