aws-neuron / transformers-neuronx


Avoid splitting Hugging Face Hub checkpoint files on disk #57

Closed · dacorvo closed this issue 1 month ago

dacorvo commented 8 months ago

In the current version, transformers-neuronx models can only be instantiated from a directory where the Hugging Face checkpoint has been split into multiple files.

This raises two major issues:

- the split weights must be stored on disk in addition to the original checkpoint, multiplying the disk footprint of every model;
- moving the many split files to or from the Hugging Face Hub generates a flood of requests, which quickly triggers quota/rate-limit errors.
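For context, the workflow being discussed looks roughly like this; a minimal sketch assuming the save_pretrained_split helper from transformers_neuronx.module (model name and parameters are illustrative):

from transformers import AutoModelForCausalLM
from transformers_neuronx.module import save_pretrained_split
from transformers_neuronx.llama.model import LlamaForSampling

# Step 1: load the full Hugging Face checkpoint into host memory,
# then re-save it on disk in the split layout.
model = AutoModelForCausalLM.from_pretrained('meta-llama/Llama-2-7b-hf')
save_pretrained_split(model, './llama-2-7b-split')

# Step 2: the Neuron model can only be instantiated from the split directory.
neuron_model = LlamaForSampling.from_pretrained('./llama-2-7b-split',
                                                batch_size=1,
                                                tp_degree=2,
                                                amp='f16')
neuron_model.to_neuron()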

aws-rhsoln commented 8 months ago

Hello! These are all good points, and we have previously run into the exact issues you describe. The initial API was intended to avoid out-of-memory issues we had been seeing with extremely large models. We intend to provide improved APIs in a future release (such as supporting the original Hugging Face checkpoints directly).

harpone commented 7 months ago

Is there a workaround for this? Trying to push_to_hub => getting rate limited (see https://github.com/huggingface/optimum-neuron/issues/358).

Even just waiting and trying again later doesn't seem to work: all files seem to be uploaded again with new commits. Of course, it would be nice on the HF side to e.g. just do one commit instead of a gazillion, but there's not much I can do about that here...

harpone commented 7 months ago

In case anyone is wondering the same as me above, here's a single-commit alternative for uploading the files:

from huggingface_hub import HfApi, HfFolder

# Reuse the token saved locally by `huggingface-cli login`.
huggingface_token = HfFolder.get_token()

api = HfApi()

# Upload the whole folder in a single commit rather than
# one commit per file.
api.upload_folder(repo_id="my_repo_id",
                  folder_path="path_to_files",
                  token=huggingface_token,
                  multi_commits=False)
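Note that upload_folder already pushes the folder contents as a single commit by default, so multi_commits=False just makes that explicit; the point is to avoid the one-commit-per-file pattern that appears to be what triggers the rate limiting above.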
dacorvo commented 5 months ago

Are there any updates on this issue? In optimum-neuron, we now fetch and split the checkpoint on-demand, which removed the quota error.
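The on-demand approach looks roughly like the following; this is a hypothetical illustration, not optimum-neuron's actual code, and the helper name and split layout are made up. Only the original shards (a handful of files) are fetched from the Hub, which keeps the request count low:

import os
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

def fetch_and_split(repo_id, shard_names, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    for shard in shard_names:
        # Download one original shard at a time instead of
        # pushing/pulling thousands of pre-split files.
        path = hf_hub_download(repo_id=repo_id, filename=shard)
        # Split the shard locally, one file per parameter.
        for name, tensor in load_file(path).items():
            torch.save(tensor, os.path.join(out_dir, f"{name}.pt"))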

However, the disk usage issue still remains, and it is made even worse by the fact that the split weights are stored in full precision.

This means that models like Llama-2-70b require a humongous amount of disk space just to be instantiated.

This is what the model should weigh on disk:

$ du -h ~/.cache/huggingface/hub/models--meta-llama--Llama-2-70b-hf/blobs/
129G    /home/ubuntu/.cache/huggingface/hub/models--meta-llama--Llama-2-70b-hf/blobs/

And this is the extra disk space taken by the transformers_neuronx split weights:

$ du -h ./data/2.16.1/llama-2-70b-hf-1x2048x24/checkpoint/pytorch_model.bin/
257G    ./data/2.16.1/llama-2-70b-hf-1x2048x24/checkpoint/pytorch_model.bin/
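These numbers line up with a quick back-of-the-envelope check, assuming the original checkpoint is stored as 2-byte bfloat16 and the split copy as 4-byte float32 (the parameter count is approximate):

n_params = 69e9                           # Llama-2-70b has roughly 69B parameters
print(f"{n_params * 2 / 2**30:.0f} GiB")  # bf16 original:  ~129 GiB
print(f"{n_params * 4 / 2**30:.0f} GiB")  # fp32 split copy: ~257 GiB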
dacorvo commented 5 months ago

Another example of the disk usage issue here: https://huggingface.co/aws-neuron/optimum-neuron-cache/discussions/2#65c164df75e658e0cf56578f

gsnaws commented 3 months ago

With 2.18, we can load safetensors checkpoints directly, without the need to save split files. Please give it a try and let us know! Refer to https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/transformers-neuronx/transformers-neuronx-developer-guide.html#checkpoint-support-and-automatic-model-selection for more details.
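For anyone landing here, usage with 2.18 looks roughly like this; a minimal sketch following the developer guide pattern, with an illustrative model name and parameters:

from transformers_neuronx import LlamaForSampling

# from_pretrained now accepts a Hub model id (or a local directory of
# safetensors files) directly; no split checkpoint is written to disk.
neuron_model = LlamaForSampling.from_pretrained('meta-llama/Llama-2-70b-hf',
                                                batch_size=1,
                                                tp_degree=24,
                                                amp='f16')
neuron_model.to_neuron()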

dacorvo commented 1 month ago

Confirmed the issue is now fixed, closing.