Closed: 152334H closed this pull request 1 day ago
Hi @152334H!
Thank you for your pull request and welcome to our community.
In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.
In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.
Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.
If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
This PR fixes the checkpoint conversion scripts for the `.safetensors` checkpoints on https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1

This is necessary because the `.pt` files and the `.safetensors` files on the Mixtral repo actually have different state dict formats. The PyTorch ones have the MoE weights concatenated, while the safetensors ones have an explicit ModuleList per layer for the expert weights (which matches the HF impl as well). Additionally, many state dict key names are simply different.

safetensors:
normal .pt file:
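For illustration, a minimal sketch of what the remapping amounts to: stacking the per-expert (ModuleList-style) tensors into a single concatenated tensor per layer. The key patterns, layer/expert counts, and helper name below are assumptions made for the sketch, not the exact names used by either checkpoint format.

```python
# Minimal sketch: collapse per-expert safetensors weights (ModuleList layout)
# into one concatenated tensor per layer (.pt-style layout).
# The key patterns below are illustrative assumptions, not the real names.
import torch
from safetensors.torch import load_file

def concat_expert_weights(state_dict, num_layers=32, num_experts=8):
    out = {}
    for layer in range(num_layers):
        for proj in ("w1", "w2", "w3"):
            # Gather the per-expert tensors for this projection...
            experts = [
                state_dict[f"layers.{layer}.feed_forward.experts.{e}.{proj}.weight"]
                for e in range(num_experts)
            ]
            # ...and concatenate them along dim 0 into a single tensor.
            out[f"layers.{layer}.feed_forward.{proj}"] = torch.cat(experts, dim=0)
    return out

# Usage (path is a placeholder):
# sd = load_file("consolidated.safetensors")
# converted = concat_expert_weights(sd)
```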