Hi! DBRX researcher here, happy to help out however I can!
The architecture is quite similar to Mixtral, which is already supported in this framework. The modeling source code for DBRX is available on the HF Hub here: https://huggingface.co/databricks/dbrx-instruct/blob/main/modeling_dbrx.py
The main differences vs. Mixtral as far as I can tell:
- The `DbrxExpertGLU` layer weights are fused along the experts dimension, aka `transformer.blocks.12.ffn.experts.mlp.w1`, not `transformer.blocks.12.ffn.experts.mlp.list_w1.0.weight`. This is what you will find in the `.safetensors` weight files; you can see the mapping explicitly in the `model.safetensors.index.json`: https://huggingface.co/databricks/dbrx-instruct/blob/main/model.safetensors.index.json
- The modeling code works with the fused tensors (`...mlp.w1`) after they are loaded from disk, see `DbrxExpertGLU`: https://huggingface.co/databricks/dbrx-instruct/blob/464e701f50aef4c1b59c81fb5667819a5d08e108/modeling_dbrx.py#L749

Please let me know if you have any questions!
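For anyone who wants to confirm that layout before writing a converter, a quick, hedged way to list the fused expert tensor names straight from the safetensors index (assumes `huggingface_hub` is installed; the repo is gated, so you may need to accept the license and log in first):

```python
# Hedged sketch: list the fused expert tensor names from the safetensors index.
import json
from huggingface_hub import hf_hub_download

index_path = hf_hub_download("databricks/dbrx-instruct", "model.safetensors.index.json")
with open(index_path) as f:
    weight_map = json.load(f)["weight_map"]

print([name for name in weight_map if "blocks.12.ffn.experts" in name])
# Expect fused names like 'transformer.blocks.12.ffn.experts.mlp.w1',
# not per-expert names like '...mlp.list_w1.0.weight'.
```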
@abhi-mosaic Thanks for the pointers. We do split the experts into separate tensors at the moment, but that is something we planned to change: https://github.com/ggerganov/llama.cpp/issues/6082

Seems like now is the time to do that.
Thanks @abhi-mosaic for the complete and detailed explanations.
@ggerganov I have a big server, I can test any PR from 16bit all the way down to 2bit. (I have the model already downloaded and ready)
Same, I've got big servers with very fast and plentiful RAM channels, so I can try all the sizes on CPU.
Put me in, coach, I'm ready to play today.
happy to test on my server
@abhi-mosaic While the llama.cpp folks are working on solving their issue with 16 experts instead of 8, I was thinking of quantizing to 4-bit with the native Hugging Face BitsAndBytes, but I'm still getting an error. P.S. This would enable many people with much smaller computational power to run the model in about 66GB, i.e. a single H100 or A100 instead of the current 4.
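For reference, the path being described is the standard `transformers` + `bitsandbytes` 4-bit load (a hedged sketch with the usual settings; whether it copes with DBRX's fused expert weights is exactly the open question here):

```python
# Hedged sketch of a standard 4-bit bitsandbytes load via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "databricks/dbrx-instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```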
@simsim314 take a look at this comment, I think someone found a workaround by splitting the fused expert weights into a `List[nn.Linear]`: https://huggingface.co/databricks/dbrx-instruct/discussions/10#660566f14f41c0c7c0e54ab9
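Roughly, the idea behind that workaround is to turn each fused expert weight into separate `nn.Linear` modules so bitsandbytes sees layers it knows how to quantize. A toy sketch (not the code from the linked discussion; the shapes follow the fused `(num_experts * ffn_hidden, d_model)` layout described above, and the bias-free assumption is mine):

```python
# Illustration only: split one fused expert weight into per-expert nn.Linear modules.
import torch
import torch.nn as nn

def fused_to_linears(w_fused: torch.Tensor, num_experts: int) -> nn.ModuleList:
    ffn_hidden, d_model = w_fused.shape[0] // num_experts, w_fused.shape[1]
    linears = nn.ModuleList()
    for i in range(num_experts):
        lin = nn.Linear(d_model, ffn_hidden, bias=False)  # DBRX experts appear to be bias-free
        with torch.no_grad():
            lin.weight.copy_(w_fused[i * ffn_hidden:(i + 1) * ffn_hidden])
        linears.append(lin)
    return linears
```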
The model is ~132B params so I think the expected memory usage is:
- 264GB with 16-bit, already possible using our HF quickstart code: https://huggingface.co/databricks/dbrx-instruct#quickstart-guide
- expected 132GB with 8-bit
- expected 99GB with 6-bit
- expected 66GB with 4-bit
Not quite ... they say only 36B parameters are "active on any input", as it is a mixture of experts model.
But the entire model still needs to be loaded into memory, even if only some of the parameters are activated.
I have the model downloaded on my server; if something is added, I can help with testing.
I have the model downloaded too and can help testing:
I have a dual 3090 setup and am interested in a 2-bit quant to see if it will fit in 48GB VRAM; I could also test with CPU layers offloaded since I'm running a 14900KS. Eric was able to get The Professor, a 155 billion parameter model, running on a dual 3090.
I'll be very excited to see this working
Is anyone actively working on this issue? If not, I can work my network to try to find someone.
MoE models will need to be exported with the experts fused into a single tensor after #6387, so it may be better to wait until that is merged before adding new MoE models (it should be soon).
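For what it's worth, "experts fused into a single tensor" just means the per-expert 2D matrices are stacked into one 3D tensor per projection; a toy-shape illustration (purely illustrative, not the converter code):

```python
# Toy illustration of fusing per-expert matrices into one 3D tensor.
import torch

n_experts, ffn_hidden, d_model = 16, 8, 4  # toy sizes, just to show the shapes
w1_per_expert = [torch.randn(ffn_hidden, d_model) for _ in range(n_experts)]
w1_fused = torch.stack(w1_per_expert, dim=0)
print(w1_fused.shape)  # torch.Size([16, 8, 4]) -> (n_experts, ffn_hidden, d_model)
```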
Many thanks for the ETA and explanation. I actually have a couple of MoE models made by MergeKit that behave badly when quantized to GGUF, and I am hoping this can also fix that.
That said, I am going to test that PR to see how it works so far. Thanks again.
@ggerganov @slaren I can see the PRs are merged, thank you so much for your work.
I have pulled the changes from master, but I still get a `KeyError: 'transformer.blocks.0.ffn.experts.mlp.w1'` error from `convert.py`, and `'DbrxForCausalLM' not supported!` when converting HF to GGUF.
Will the MoE support for DBRX be added in another PR?
DBRX requires a convert script (`convert-hf-to-gguf.py`) + graph implementation as usual. See #6074 as an example of what needs to be done for DBRX.
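For a rough picture of the convert side, following the registration pattern the script already uses for other architectures (a hedged sketch meant to live inside `convert-hf-to-gguf.py`, so `Model` and `gguf` are assumed to be in scope; `MODEL_ARCH.DBRX`, the tensor mappings, and the `ffn_config` field names are assumptions, not the final code):

```python
# Hedged sketch only: assumes this sits inside convert-hf-to-gguf.py, where
# `Model` and `gguf` are already imported; the names below are assumptions.
@Model.register("DbrxForCausalLM")
class DbrxModel(Model):
    model_arch = gguf.MODEL_ARCH.DBRX  # new enum value + tensor mappings needed in gguf-py

    def set_gguf_parameters(self):
        super().set_gguf_parameters()
        ffn_config = self.hparams["ffn_config"]
        self.gguf_writer.add_expert_count(ffn_config["moe_num_experts"])  # 16 for DBRX
        self.gguf_writer.add_expert_used_count(ffn_config["moe_top_k"])   # 4 for DBRX
```

On top of that, the fused `ffn.experts.mlp.*` tensors would need to be mapped or reshaped to whatever layout the llama.cpp graph expects after #6387, plus the usual graph implementation in `llama.cpp` itself.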
Thank you, I'll see if I can have a look at the Qwen MoE PR and make one for DBRX if I am not beaten to it.
Is someone actively working on this? Any help needed?
In the meantime, if you are on Mac there is https://huggingface.co/mlx-community/dbrx-instruct-4bit
@ehartford Looks like about 70GB of unified memory. What do you think we could expect the memory requirements to be on CUDA in 2-bit? My sense is that a larger model at a lower bitrate seems like a good trade-off. Thanks in advance for your insights.
There are already 2-bit exllama weights: https://huggingface.co/turboderp/dbrx-instruct-exl2
On a VRAM-constrained GPU deployment, I'd go with exl2.
@ggerganov or @slaren it looks like DBRX has a special tokenizer:
Are we currently supporting this somehow?
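For reference, a quick way to inspect that tokenizer from the HF side (a hedged sketch; it is a tiktoken-style GPT-4 tokenizer exposed via `trust_remote_code`, and the printed values are whatever the repo ships):

```python
# Hedged inspection sketch: shows what the HF side reports for the DBRX
# tokenizer, for comparison against whatever llama.cpp ends up doing.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", trust_remote_code=True)
print(type(tok).__name__)           # the custom tokenizer class from the repo
print(tok.vocab_size)               # GPT-4-style tiktoken vocab, roughly 100k entries
print(tok.encode("Hello, DBRX!"))   # token ids to compare against a llama.cpp port
```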
Many thanks for starting this and having a branch for it. I got badly stuck in that tiktoken tokenization! I just don't know how to make a custom tokenizer work in llama.cpp. (I'll contribute to your PR if you need any testing.)
FYI: https://github.com/ggerganov/llama.cpp/compare/hp/model/support-dbrx
Yes, I don't know how our tokenizer will behave at the moment. We will see if I am able to reach the draft PR step. Thanks.
@maziyarpanahi @ggerganov As I have done the conversion to `gguf` (not tested yet), I am wondering what the exact conditions are to meet the DBRX License.
Can we upload the GGUF quants on HF, and if yes, how? I see a few approaches, but I am not a lawyer:
- Set the HF model license to `databricks-open-model-license (other)`, as the original model does
- Set another open-source license (I am not sure if it is allowed) and attach a `Notice` file with "DBRX is provided under and subject to the Databricks Open Model License, Copyright © Databricks, Inc. All rights reserved."
- Do not distribute on HF

I have some concerns especially about:

> Any additional or different terms and conditions you impose must not conflict with the terms of this Agreement and in the event of a conflict, the terms and conditions of this Agreement shall govern over any such additional or different terms and conditions.

> You will not use DBRX or DBRX Derivatives or any Output to improve any other large language model (excluding DBRX or DBRX Derivatives).

Probably need help from Databricks, @abhi-mosaic ?
You just copy the original license exactly how they did it in their model card
@ggerganov please confirm I can upload it on `ggml-org` with the above?
Thanks @phymbert for your work.
Do you have a PR ready so I can also test it locally?
> @ggerganov please confirm I can upload it on `ggml-org` with the above?
No need to upload it - in `ggml-org` we only want to have models that are used by the CI or for other kinds of test/demo purposes.
Noted, deleted
Feature Description
Databricks just released 2 new models called DBRX (base and instruct). They have their own architecture:
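The key architecture facts for llama.cpp purposes can be pulled straight from the HF config (a hedged sketch; the attribute access on `ffn_config` assumes the repo's `DbrxConfig` wraps it in a sub-config and that `trust_remote_code` is allowed):

```python
# Hedged sketch: read the architecture facts llama.cpp support would need.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("databricks/dbrx-instruct", trust_remote_code=True)
print(cfg.architectures)               # expected: ['DbrxForCausalLM']
print(cfg.ffn_config.moe_num_experts)  # 16 experts
print(cfg.ffn_config.moe_top_k)        # 4 experts routed per token
```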
Motivation
These models are superior to predecessors like Llama-2 or Mixtral (even though they are larger), so the community can really benefit from these two and from the fine-tuned models that will come after.
https://huggingface.co/databricks/dbrx-instruct
Possible Implementation
python llama.cpp/convert-hf-to-gguf.py
python llama.cpp/convert.py