pytorch-labs / gpt-fast

Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.
BSD 3-Clause "New" or "Revised" License

Unified Llama 3 (8b,70b) + Safetensors support #169

Closed nivibilla closed 5 months ago

nivibilla commented 7 months ago

As discussed in #158

This PR unifies the support for llama 3

However, you must convert the model files from the safetensors format to the pytorch_model.bin format.

This can be done with: model.save_pretrained('/llama-3-70b-instruct-hf-pt', safe_serialization=False)
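A self-contained version of that conversion (a minimal sketch; the repo id and output path are illustrative, and it needs enough host memory to materialize the model):

```python
# Minimal sketch: convert a safetensors checkpoint to pytorch_model.bin format.
# Repo id and output path are illustrative; loading materializes the full
# model in host memory.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")
model.save_pretrained("./llama-3-70b-instruct-hf-pt", safe_serialization=False)
```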

I have pre-converted and uploaded pytorch_model.bin versions of llama-3-8b-instruct and llama-3-70b-instruct for use.

UPDATE: Thanks to @jerrymannil for the safetensors tip. We can now load from safetensors directly and use the official Meta repos. In general, this also adds support for safetensors versions of other models, not just Llama 3.
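For anyone curious, the direct loading amounts to something like this (a minimal sketch, not the exact diff in this PR; the shard file name is illustrative):

```python
# Minimal sketch: read tensors straight from a .safetensors shard,
# with no intermediate pytorch_model.bin step. File name is illustrative.
from safetensors.torch import load_file

state_dict = load_file("model-00001-of-00004.safetensors")
print({k: tuple(v.shape) for k, v in list(state_dict.items())[:3]})
```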

Some performance numbers on 8xA10:

python generate.py --compile --checkpoint_path ./llama-3-8b-instruct-hf-pt/model.pth

# 70b TP8
Average tokens/sec: 21.79
Memory used: 21.66 GB/GPU

# 8b TP8
Average tokens/sec: 112.74
Memory used: 4.19 GB/GPU

# 8b NO_TP
Average tokens/sec: 34.06
Memory used: 16.43 GB/GPU
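(The TP8 rows use tensor parallelism; per the gpt-fast README's tensor-parallel instructions, that is launched via torchrun, roughly as below, with the checkpoint path illustrative:)

```
ENABLE_INTRA_NODE_COMM=1 torchrun --standalone --nproc_per_node=8 \
    generate.py --compile --checkpoint_path ./llama-3-70b-instruct-hf-pt/model.pth
```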

Thanks @Artyom17 for the initial implementation (especially the tokenizer changes!)

facebook-github-bot commented 7 months ago

Hi @nivibilla!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

facebook-github-bot commented 7 months ago

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

nivibilla commented 7 months ago

This is ready for review @yanboliang @Chillee. It's a quick follow-up to the previous Llama 3 PR.

Thanks!

nivibilla commented 7 months ago

I've made a request in the official repos to have the pytorch_model.bin files so that this can be easier.

Artyom17 commented 7 months ago

> I've made a request in the official repos to have the pytorch_model.bin files so that this can be easier.

That would be awesome! And thanks for doing this!

lightmatmul commented 6 months ago

How can we run Llama 3 70B? I used your pre-converted repo and it cannot find the .pth file.

nivibilla commented 6 months ago

You have to convert it as usual, following the instructions in the README.
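(For reference, the README flow is roughly the two commands below; the repo id is illustrative and the scripts may have changed since:)

```
python scripts/download.py --repo_id meta-llama/Meta-Llama-3-70B-Instruct
python scripts/convert_hf_checkpoint.py --checkpoint_dir checkpoints/meta-llama/Meta-Llama-3-70B-Instruct
```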

lightmatmul commented 6 months ago

It works! Thanks. FYI, I'm getting 39 tok/s on 4xH100.

nivibilla commented 6 months ago

https://github.com/pytorch-labs/gpt-fast/pull/158#issuecomment-2091144762

According to this, it should work with safetensors too, but I haven't had the time to make the changes and test.

jerrymannil commented 6 months ago

@nivibilla Do you have time to work on this soon?

nivibilla commented 6 months ago

@jerrymannil sorry, I forgot about this. I've made the changes and tested with Llama 3 8B; it works fine. We no longer need the PyTorch-converted files. I've tested it with the official Meta repo and I am able to convert and run. This should actually allow for all safetensors models, not just Llama 3.

Thanks @Artyom17 and @jerrymannil

This PR is ready to be reviewed/merged.

nivibilla commented 6 months ago

@yanboliang @Chillee could you please have a look? Thanks.

nivibilla commented 6 months ago

@yanboliang @Chillee could you have a look please? Thanks.

xavierpuigf commented 6 months ago

Hi, thanks for this PR and this codebase!

I tested this and the model works well for short context lengths but fails on longer (>500 token) contexts. I found it to have results on par with the Hugging Face implementation with rope_base=50000 for both Llama 3 models. I would suggest changing the config here: https://github.com/pytorch-labs/gpt-fast/blob/bbeff35e101d2388a8e01137ed6d943c4b1a1758/model.py#L68

jerrymannil commented 5 months ago

> Hi, thanks for this PR and this codebase!
>
> I tested this and the model works well for short context lengths but fails on longer (>500 token) contexts. I found it to have results on par with the Hugging Face implementation with rope_base=50000 for both Llama 3 models. I would suggest changing the config here:
>
> https://github.com/pytorch-labs/gpt-fast/blob/bbeff35e101d2388a8e01137ed6d943c4b1a1758/model.py#L68

@nivibilla Can you look ? Thanks.

nivibilla commented 5 months ago

Thanks @xavierpuigf for testing, I've made the changes.

And thanks @jerrymannil for the ping, I must have missed this email.

musab-mk commented 5 months ago

@xavierpuigf Hugging Face uses 500000, not the 50000 you suggested. Is there a reason you used 50k instead of 500k?

xavierpuigf commented 5 months ago

> @xavierpuigf Hugging Face uses 500000, not the 50000 you suggested. Is there a reason you used 50k instead of 500k?

That's right, I had a typo. rope_base should be 500K. Thanks for checking!
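For reference, a sketch of what the corrected config values look like (Llama 3 8B numbers; treat this as illustrative of the fix, not the exact diff):

```python
# Illustrative Llama 3 8B settings for gpt-fast's transformer_configs in
# model.py; the key fix is rope_base=500000 (Hugging Face's rope_theta),
# not 50000, and not the Llama 2 default of 10000.
llama3_8b = dict(
    block_size=8192, n_layer=32, n_head=32, n_local_heads=8, dim=4096,
    intermediate_size=14336, vocab_size=128256, rope_base=500000,
)
```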

nivibilla commented 5 months ago

Updated

jerrymannil commented 5 months ago

@Chillee @yanboliang Can one of you approve?