ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Add support for DBRX models: dbrx-base and dbrx-instruct #6344

Closed: maziyarpanahi closed this issue 6 months ago

maziyarpanahi commented 7 months ago

Feature Description

Databricks just released two new models, DBRX Base and DBRX Instruct. They use their own architecture:

{
  "architectures": [
    "DbrxForCausalLM"
  ],
  "attn_config": {
    "clip_qkv": 8,
    "kv_n_heads": 8,
    "model_type": "",
    "rope_theta": 500000
  },
  "auto_map": {
    "AutoConfig": "configuration_dbrx.DbrxConfig",
    "AutoModelForCausalLM": "modeling_dbrx.DbrxForCausalLM"
  },
  "d_model": 6144,
  "emb_pdrop": 0.0,
  "ffn_config": {
    "ffn_hidden_size": 10752,
    "model_type": "",
    "moe_jitter_eps": 0,
    "moe_loss_weight": 0.05,
    "moe_num_experts": 16,
    "moe_top_k": 4
  },
  "initializer_range": 0.02,
  "max_seq_len": 32768,
  "model_type": "dbrx",
  "n_heads": 48,
  "n_layers": 40,
  "output_router_logits": false,
  "resid_pdrop": 0.0,
  "router_aux_loss_coef": 0.05,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.38.2",
  "use_cache": true,
  "vocab_size": 100352
}

Motivation

These models outperform predecessors like Llama-2 and Mixtral (even though they are larger); the community could really benefit from these two and from the fine-tuned models that will follow.

https://huggingface.co/databricks/dbrx-instruct

Possible Implementation


python llama.cpp/convert-hf-to-gguf.py

Traceback (most recent call last):
  File "/llama.cpp/convert-hf-to-gguf.py", line 2099, in <module>
    main()
  File "/llama.cpp/convert-hf-to-gguf.py", line 2079, in main
    model_class = Model.from_model_architecture(hparams["architectures"][0])
  File "/llama.cpp/convert-hf-to-gguf.py", line 215, in from_model_architecture
    raise NotImplementedError(f'Architecture {arch!r} not supported!') from None
NotImplementedError: Architecture 'DbrxForCausalLM' not supported!

python llama.cpp/convert.py

  File "/llama.cpp/convert.py", line 1486, in <module>
    main()
  File "/llama.cpp/convert.py", line 1422, in main
    model_plus = load_some_model(args.model)
  File "/llama.cpp/convert.py", line 1291, in load_some_model
    model_plus = merge_multifile_models(models_plus)
  File "/llama.cpp/convert.py", line 747, in merge_multifile_models
    model = merge_sharded([mp.model for mp in models_plus])
  File "/llama.cpp/convert.py", line 726, in merge_sharded
    return {name: convert(name) for name in names}
  File "/llama.cpp/convert.py", line 726, in <dictcomp>
    return {name: convert(name) for name in names}
  File "/llama.cpp/convert.py", line 701, in convert
    lazy_tensors: list[LazyTensor] = [model[name] for model in models]
  File "/llama.cpp/convert.py", line 701, in <listcomp>
    lazy_tensors: list[LazyTensor] = [model[name] for model in models]
KeyError: 'transformer.blocks.0.ffn.experts.mlp.w1'

DBRX is a mixture-of-experts (MoE) model in which each FFN is divided into 16 experts, and only 4 are activated at any given time. It builds on MegaBlocks: https://github.com/databricks/megablocks
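
For illustration, here is a minimal sketch of the top-4-of-16 routing described above; it is not DBRX's actual implementation, and the dimensions are shrunk for readability (the real model uses d_model=6144 and ffn_hidden_size=10752 per the config above):

import torch

n_experts, top_k, d_model, ffn_hidden = 16, 4, 64, 112

router = torch.nn.Linear(d_model, n_experts, bias=False)
experts = [torch.nn.Sequential(
    torch.nn.Linear(d_model, ffn_hidden, bias=False),
    torch.nn.SiLU(),
    torch.nn.Linear(ffn_hidden, d_model, bias=False),
) for _ in range(n_experts)]

def moe_ffn(x):                                    # x: (n_tokens, d_model)
    scores = router(x)                             # score all 16 experts per token
    weights, idx = torch.topk(scores, top_k, -1)   # keep only the 4 best
    weights = torch.softmax(weights, dim=-1)       # renormalize over the chosen 4
    out = torch.zeros_like(x)
    for t in range(x.shape[0]):
        for j in range(top_k):
            out[t] += weights[t, j] * experts[int(idx[t, j])](x[t])
    return out

print(moe_ffn(torch.randn(3, d_model)).shape)      # torch.Size([3, 64])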

abhi-mosaic commented 7 months ago

Hi! DBRX researcher here, happy to help out however I can!

The architecture is quite similar to Mixtral, which is already supported in this framework. The modeling source code for DBRX is available on the HF Hub here: https://huggingface.co/databricks/dbrx-instruct/blob/main/modeling_dbrx.py

The main differences vs. Mixtral as far as I can tell:

Please let me know if you have any questions!
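
As a side note, one difference that is visible in the config above is "clip_qkv": 8. Below is a rough sketch of what QKV clipping typically means (clamping the fused QKV projection to [-clip_qkv, clip_qkv] before splitting it into heads); treat the exact placement as an assumption to verify against modeling_dbrx.py:

import torch

clip_qkv = 8.0                                     # "clip_qkv": 8 in config.json above

def project_qkv(x, w_qkv):
    qkv = x @ w_qkv                                # fused Q/K/V projection
    return qkv.clamp(min=-clip_qkv, max=clip_qkv)  # clip before splitting into q, k, v

print(project_qkv(torch.randn(2, 8), torch.randn(8, 24)).abs().max() <= clip_qkv)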

abhi-mosaic commented 7 months ago

The model is ~132B params so I think the expected memory usage is:
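
For reference, a rough back-of-the-envelope estimate of weight memory for a ~132B-parameter model at a few precisions (weights only; KV cache, activations, and quantization overhead are ignored, so real usage is higher):

params = 132e9
for name, bits in [("fp16/bf16", 16), ("q8", 8), ("q4", 4), ("q2", 2)]:
    print(f"{name:>9}: ~{params * bits / 8 / 1024**3:.0f} GiB")
# roughly: fp16/bf16 ~246 GiB, q8 ~123 GiB, q4 ~61 GiB, q2 ~31 GiB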

ggerganov commented 7 months ago

@abhi-mosaic Thanks for the pointers. We do split the experts in separate tensors at the moment, but it is something that we planned to change: https://github.com/ggerganov/llama.cpp/issues/6082

Seems like now is the time to do that.

maziyarpanahi commented 7 months ago

Thanks @abhi-mosaic for all the complete and detailed explanations.

@ggerganov I have a big server; I can test any PR from 16-bit all the way down to 2-bit. (I already have the model downloaded and ready.)

moshemalawach commented 7 months ago

Same here: I have big servers with very fast and plentiful RAM channels, so I can try all the sizes on CPU.

sirus20x6 commented 7 months ago

Put me in, coach, I'm ready to play today.


veryvanya commented 7 months ago

happy to test on my server

simsim314 commented 7 months ago

@abhi-mosaic While the llama.cpp folks work on supporting 16 experts instead of 8, I was thinking of quantizing to 4-bit with the native Hugging Face BitsAndBytes, but I'm still getting an error. P.S. This would let many people with much less compute run the model (about 66GB), i.e. a single H100 or A100 instead of the current 4.
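
For anyone who wants to try the same route, here is a minimal sketch of a 4-bit BitsAndBytes load via transformers; it is untested on DBRX, and the model id and trust_remote_code usage are assumptions based on the links in this thread:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    quantization_config=bnb_config,
    device_map="auto",           # spread the quantized weights across available GPUs
    trust_remote_code=True,      # DBRX ships custom modeling/configuration code
)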

abhi-mosaic commented 7 months ago

@simsim314 Take a look at this comment; I think someone found a workaround:

https://huggingface.co/databricks/dbrx-instruct/discussions/10#660566f14f41c0c7c0e54ab9

peterhgruber commented 7 months ago

The model is ~132B params so I think the expected memory usage is:

Not quite ... they say only 36B parameters are "active on any input", as it is a mixture of experts model.

wrapss commented 7 months ago

Not quite ... they say only 36B parameters are "active on any input", as it is a mixture of experts model.

But the entire model still needs to be loaded into memory, even if not all of the parameters are activated.

MohamedAliRashad commented 7 months ago

I have the model downloaded on my server; if something is added, I can help with testing.

RodriMora commented 7 months ago

I have the model downloaded too and can help with testing.


nkeilar commented 7 months ago

I have a dual 3090 setup and am interested in a 2-bit quant to see if it will fit in 48GB of VRAM; I could also test with CPU layers offloaded, as I'm running a 14900KS. Eric was able to get The Professor, a 155-billion-parameter model, running on a dual 3090.

ehartford commented 7 months ago

I'll be very excited to see this working

ehartford commented 7 months ago

Is anyone actively working on this issue? If not, I can work my network to try to find someone.

slaren commented 7 months ago

MoE models will need to be exported with the experts fused into a single tensor after #6387, so it may be better to wait until that is merged before adding new MoE models (it should be soon).
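
For context, here is an illustrative sketch of the general "experts fused into a single tensor" idea: instead of storing one 2D weight tensor per expert, the conversion stacks the per-expert weights along a new leading dimension into a single 3D tensor. Shapes are shrunk for readability; DBRX's real w1 is (10752, 6144) per expert, and the exact layout used by #6387 may differ:

import torch

n_experts = 16
per_expert = [torch.randn(8, 4) for _ in range(n_experts)]  # one small stand-in weight per expert
fused = torch.stack(per_expert, dim=0)                      # single (n_experts, 8, 4) tensor
print(fused.shape)                                          # torch.Size([16, 8, 4])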

maziyarpanahi commented 7 months ago

MoE models will need to be exported with the experts fused into a single tensor after #6387, so it may be better to wait until that is merged before adding new MoE models (it should be soon).

Many thanks for the ETA and explanation. I actually have a couple of MoE models made with MergeKit that behave badly when quantized to GGUF; I'm hoping this can also fix that.

That said, I am going to test that PR to see how it works so far. Thanks again.

maziyarpanahi commented 7 months ago

@ggerganov @slaren I can see the PRs are merged, thank you so much for your work.

I have pulled the changes from master, but I still get the KeyError: 'transformer.blocks.0.ffn.experts.mlp.w1' error from convert.py and Architecture 'DbrxForCausalLM' not supported! from convert-hf-to-gguf.py.

Will MoE support for DBRX be added in another PR?

ggerganov commented 7 months ago

DBRX requires a convert script (convert-hf-to-gguf.py) + graph implementation as usual. See #6074 as an example of what needs to be done for DBRX
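
For anyone picking this up, here is a hypothetical sketch (not a working patch) of the rough shape such a convert-hf-to-gguf.py entry could take, assuming the Model.register pattern used by the other architectures in that script; gguf.MODEL_ARCH.DBRX does not exist yet, and the hparam mapping below is only a guess based on the config.json posted above:

import gguf  # gguf-py, bundled with llama.cpp

@Model.register("DbrxForCausalLM")       # Model is defined in convert-hf-to-gguf.py
class DbrxModel(Model):
    model_arch = gguf.MODEL_ARCH.DBRX    # would have to be added to gguf-py first

    def set_gguf_parameters(self):
        hp = self.hparams
        self.gguf_writer.add_block_count(hp["n_layers"])
        self.gguf_writer.add_context_length(hp["max_seq_len"])
        self.gguf_writer.add_embedding_length(hp["d_model"])
        self.gguf_writer.add_head_count(hp["n_heads"])
        self.gguf_writer.add_head_count_kv(hp["attn_config"]["kv_n_heads"])
        self.gguf_writer.add_rope_freq_base(hp["attn_config"]["rope_theta"])
        self.gguf_writer.add_clamp_kqv(hp["attn_config"]["clip_qkv"])
        self.gguf_writer.add_feed_forward_length(hp["ffn_config"]["ffn_hidden_size"])
        self.gguf_writer.add_expert_count(hp["ffn_config"]["moe_num_experts"])
        self.gguf_writer.add_expert_used_count(hp["ffn_config"]["moe_top_k"])
        # tensor-name remapping and the C++ graph would still be needed, as in #6074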

maziyarpanahi commented 7 months ago

DBRX requires a convert script (convert-hf-to-gguf.py) + graph implementation as usual. See #6074 as an example of what needs to be done for DBRX

Thank you, I'll see if I can have a look at the Qwen MoE PR and make one for DBRX if I'm not beaten to it.

phymbert commented 7 months ago

Is someone actively working on this? Any help needed?

ehartford commented 7 months ago

In the meantime, if you are on a Mac there is https://huggingface.co/mlx-community/dbrx-instruct-4bit

nkeilar commented 7 months ago

In the meantime, if you are on a Mac there is https://huggingface.co/mlx-community/dbrx-instruct-4bit

@ehartford Looks like about 70GB of unified memory. What do you think the memory requirements would be on CUDA at 2-bit? My sense is that a larger model at a lower bitrate is a good trade-off. Thanks in advance for your insights.

KnutJaegersberg commented 7 months ago

There are already 2-bit ExLlama weights: https://huggingface.co/turboderp/dbrx-instruct-exl2

ehartford commented 7 months ago

For a VRAM-constrained GPU deployment, I'd go with exl2.

phymbert commented 7 months ago

@ggerganov or @slaren it looks like DBRX has a special tokenizer:

Are we currently supporting this somehow?

maziyarpanahi commented 7 months ago

@ggerganov or @slaren it looks like DBRX has a special tokenizer:

Are we currently supporting this somehow?

Many thanks for starting this and having a branch for it. I got badly stuck on that tiktoken tokenization! I just don't know how to make a custom tokenizer work in llama.cpp. (I'll contribute to your PR if you need any testing.)

FYI: https://github.com/ggerganov/llama.cpp/compare/hp/model/support-dbrx

phymbert commented 7 months ago

@ggerganov or @slaren it looks like DBRX has a special tokenizer:

Are we currently supporting this somehow?

Many thanks for starting this and having a branch for it. I got badly stuck on that tiktoken tokenization! I just don't know how to make a custom tokenizer work in llama.cpp. (I'll contribute to your PR if you need any testing.)

FYI: https://github.com/ggerganov/llama.cpp/compare/hp/model/support-dbrx

Yes, I don't know how our tokenizer will behave at the moment. We'll see if I'm able to reach the draft PR step. Thanks.
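
As a quick, untested sanity check on the tokenizer question, one could compare the HF wrapper against plain tiktoken; this assumes (based on the tiktoken mention above) that DBRX wraps a cl100k-style BPE, and the model id is taken from the links in this thread:

import tiktoken
from transformers import AutoTokenizer

hf_tok = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", trust_remote_code=True)
tt_enc = tiktoken.get_encoding("cl100k_base")

text = "Hello, DBRX!"
print(hf_tok.encode(text))   # ids from the HF wrapper
print(tt_enc.encode(text))   # ids from plain tiktoken, for comparison
print(hf_tok.vocab_size)     # compare with vocab_size = 100352 in config.json above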

phymbert commented 6 months ago

DBRX License clarification for GGUF

@maziyarpanahi @ggerganov As I have done the conversion to GGUF (not tested yet), I am wondering what the exact conditions are to meet the DBRX license.

Can we upload the GGUF quants to HF, and if yes, how? I see a few approaches, but I am not a lawyer:

  1. Set the HF model license to databricks-open-model-license (other), like the original model.
  2. Set another open-source license (I am not sure that is allowed) and attach a NOTICE file with "DBRX is provided under and subject to the Databricks Open Model License, Copyright © Databricks, Inc. All rights reserved."
  3. Do not distribute on HF.

I have some concerns especially about:

  • Any additional or different terms and conditions you impose must not conflict with the terms of this Agreement and in the event of a conflict, the terms and conditions of this Agreement shall govern over any such additional or different terms and conditions.
  • You will not use DBRX or DBRX Derivatives or any Output to improve any other large language model (excluding DBRX or DBRX Derivatives).

Probably need help from Databricks, @abhi-mosaic?

ehartford commented 6 months ago

You just copy the original license exactly as they did in their model card.

phymbert commented 6 months ago

You just copy the original license exactly as they did in their model card.

@ggerganov please confirm I can upload it to ggml-org with the above?

maziyarpanahi commented 6 months ago

DBRX License clarification for GGUF

@maziyarpanahi @ggerganov As I have done the conversion to GGUF (not tested yet), I am wondering what the exact conditions are to meet the DBRX license.

Can we upload the GGUF quants to HF, and if yes, how? I see a few approaches, but I am not a lawyer:

  1. Set the HF model license to databricks-open-model-license (other), like the original model.
  2. Set another open-source license (I am not sure that is allowed) and attach a NOTICE file with "DBRX is provided under and subject to the Databricks Open Model License, Copyright © Databricks, Inc. All rights reserved."
  3. Do not distribute on HF.

I have some concerns especially about:

  • Any additional or different terms and conditions you impose must not conflict with the terms of this Agreement and in the event of a conflict, the terms and conditions of this Agreement shall govern over any such additional or different terms and conditions.
  • You will not use DBRX or DBRX Derivatives or any Output to improve any other large language model (excluding DBRX or DBRX Derivatives).

Probably need help from Databricks, @abhi-mosaic?

Thanks @phymbert for your work.

phymbert commented 6 months ago

Do you have a PR ready so I can also test it locally?

ggerganov commented 6 months ago

You just copy the original license exactly as they did in their model card.

@ggerganov please confirm I can upload it to ggml-org with the above?

No need to upload it - in ggml-org we only want to have models that are used by the CI or for other kinds of test/demo purposes.

phymbert commented 6 months ago

No need to upload it - in ggml-org we only want to have models that are used by the CI or for other kinds of test/demo purposes.

Noted, deleted