Closed: nivibilla closed this pull request 5 months ago.
Hi @nivibilla!
Thank you for your pull request and welcome to our community.
In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.
In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.
Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.
If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
This is ready for review @yanboliang @Chillee. It's a quick follow-up to the previous Llama 3 PR.
Thanks!
I've made a request in the official repos to include the pytorch_model.bin files so that this is easier.
That would be awesome! And thanks for doing this!
How can we run Llama 3 70B? I used your pre-converted repo and it cannot find the .pth file.
You have to convert it as usual, following the instructions in the README.
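For reference, a conversion invocation following the README's pattern might look like the line below (the checkpoint directory is illustrative; the flag name comes from the repo's scripts/convert_hf_checkpoint.py):

python scripts/convert_hf_checkpoint.py --checkpoint_dir checkpoints/meta-llama/Meta-Llama-3-70B-Instruct

This produces the model.pth file that generate.py expects.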
It works! Thanks. FYI, I'm getting 39 tokens/s on 4xH100.
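For anyone else running the 70B model across multiple GPUs: the README's tensor-parallel invocation should apply here; a sketch, with the path and GPU count illustrative:

torchrun --standalone --nproc_per_node=4 generate.py --compile --checkpoint_path checkpoints/meta-llama/Meta-Llama-3-70B-Instruct/model.pth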
https://github.com/pytorch-labs/gpt-fast/pull/158#issuecomment-2091144762
According to this, it should work with safetensors too, but I haven't had the time to make the changes and test.
@nivibilla Do you have time to work on this soon?
@jerrymannil sorry, I forgot about this. I've made the changes and tested with Llama 3 8B; it works fine. We no longer need the PyTorch-converted files. I've tested with the official Meta repo and I am able to convert and run. This should actually enable all safetensors models, not just Llama 3.
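A minimal sketch of what loading safetensors shards directly involves (paths are illustrative, and the real conversion in scripts/convert_hf_checkpoint.py also remaps Hugging Face weight names to gpt-fast's, which is omitted here):

import glob
import torch
from safetensors.torch import load_file

# Merge all safetensors shards into a single state dict, then save as .pth.
state_dict = {}
for shard in sorted(glob.glob("checkpoints/meta-llama/Meta-Llama-3-8B-Instruct/*.safetensors")):
    state_dict.update(load_file(shard, device="cpu"))
torch.save(state_dict, "checkpoints/meta-llama/Meta-Llama-3-8B-Instruct/model.pth")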
Thanks @Artyom17 and @jerrymannil
This PR is ready to be reviewed/merged.
@yanboliang @Chillee could you please have a look? Thanks.
@yanboliang @Chillee could you have a look, please? Thanks.
Hi, thanks for this PR and this codebase!
I tested this and the model works well for short context lengths but fails on longer (>500 token) contexts. I found it to give results on par with the Hugging Face implementation with rope_base=50000 for both Llama 3 models. I would suggest changing the config here:
https://github.com/pytorch-labs/gpt-fast/blob/bbeff35e101d2388a8e01137ed6d943c4b1a1758/model.py#L68
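For context on why rope_base matters here: rotary embeddings derive their per-dimension frequencies from this base, so a larger base lowers the frequencies, stretching the positional wavelengths and keeping long contexts coherent. A generic illustration, not gpt-fast's exact code:

import torch

def rope_inv_freq(dim: int, base: float) -> torch.Tensor:
    # One frequency per pair of embedding dimensions; a larger base
    # yields lower frequencies, i.e. longer positional wavelengths.
    return 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

print(rope_inv_freq(128, 10000.0)[-1])   # Llama 2's base
print(rope_inv_freq(128, 500000.0)[-1])  # Llama 3's much larger base (value corrected below)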
@nivibilla Can you look? Thanks.
Thanks @xavierpuigf for testing, I've made the changes.
And thanks @jerrymannil for the ping; I must have missed this email.
@xavierpuigf Hugging Face uses 500000 instead of the 50000 you suggested. Is there a reason you used 50K instead of 500K?
That's right, I had a typo. rope_base should be 500K. Thanks for checking!
Updated
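For reference, the corrected Llama 3 entry in model.py's transformer_configs would carry rope_base=500000. The other values below reflect the Llama 3 8B architecture and are shown for illustration only:

"Llama-3-8B": dict(block_size=8192, n_layer=32, n_head=32, n_local_heads=8,
                   dim=4096, intermediate_size=14336, vocab_size=128256,
                   rope_base=500000),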
@Chillee @yanboliang Can one of you approve?
As discussed in #158
This PR unifies the support for Llama 3; however, you must convert the model files from the safetensors format to the pytorch_model.bin format. This can be done with:

model.save_pretrained("/llama-3-70b-instruct-hf-pt", safe_serialization=False)

I have pre-converted and uploaded the pytorch_model.bin versions of llama-3-8b-instruct and llama-3-70b-instruct for use.

UPDATE: Thanks to @jerrymannil for the safetensors tip. We can now load from safetensors directly and use the official Meta repos. This also adds general support for safetensors versions of other models, not just Llama 3.
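For completeness, the conversion one-liner above expands to something like this (model name and output path are illustrative; as the UPDATE notes, this step is no longer required now that safetensors load directly):

import torch
from transformers import AutoModelForCausalLM

# Load the safetensors checkpoint and re-save it in pytorch_model.bin format.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct", torch_dtype=torch.bfloat16
)
model.save_pretrained("/llama-3-70b-instruct-hf-pt", safe_serialization=False)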
Some performance numbers on 8xA10:
python generate.py --compile --checkpoint_path ./llama-3-8b-instruct-hf-pt/model.pth
Thanks @Artyom17 for the initial implementation (especially the tokenizer changes!)