pytorch-labs / gpt-fast

Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python.

Speed up model loading by 25% #23

Closed: daulet closed this pull request 4 months ago

daulet commented 10 months ago

Load the checkpoint directly to the target device. In my testing, loading Llama 7B went from 7.83 to 5.78 seconds (about 25% faster).

I also noticed that GPU memory usage temporarily doubles, at least until compilation finishes. My understanding is that this is because the checkpoint is stored in float16: at load time, when we mmap it and assign the weights, we still have to cast them to bfloat16, which likely causes the extra allocation. Hence the change to convert_hf_checkpoint to store weights in bfloat16, which is what generate uses by default. After re-exporting the .pth files, I no longer observed the temporary 2x spike in GPU memory.
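For concreteness, here is a minimal sketch of the two changes described above. The function names and the `mmap`/`weights_only` flags are illustrative assumptions, not the exact gpt-fast diff:

```python
import torch

def convert_and_save(state_dict: dict, out_path: str) -> None:
    # Export-time fix (sketch of the convert_hf_checkpoint change): store the
    # weights in bfloat16 so generate's default precision needs no cast, and
    # hence no temporary float16 + bfloat16 copies, at load time.
    state_dict = {k: v.to(torch.bfloat16) for k, v in state_dict.items()}
    torch.save(state_dict, out_path)

def load_checkpoint(path: str, device: str = "cuda"):
    # Load-time fix (sketch): materialize the mmapped weights directly on the
    # target device instead of staging them on CPU first. Assumes a PyTorch
    # version where mmap=True composes with a non-CPU map_location.
    return torch.load(path, map_location=device, mmap=True, weights_only=True)
```

With both in place, the tensors come off disk already in the dtype and on the device that generate expects, which is where the ~25% load-time improvement and the disappearance of the 2x memory spike come from.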

This also fixes #22.

facebook-github-bot commented 10 months ago

Hi @daulet!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

facebook-github-bot commented 10 months ago

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!