josharian opened this issue 2 months ago
Thanks for the report. @mattdangerw, since the k/v shapes are different from the q shape in the 2b model, we might want to change the sharding spec for that, e.g. we could make it `(None, data, None)` since the first dim is always 1.
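For concreteness, a user-level sketch of that suggestion might look like the following (not a confirmed fix; it assumes a two-GPU `("batch", "model")` mesh and a Keras 3 / keras-nlp version where `ModelParallel` still takes the device mesh directly, and it deletes the combined q/k/v rule so the key/value kernels do not match two layout patterns at once):

```python
import keras
import keras_nlp

# Hypothetical 1x2 mesh: all model parallelism on the "model" axis across 2 GPUs.
device_mesh = keras.distribution.DeviceMesh(
    shape=(1, 2),
    axis_names=("batch", "model"),
    devices=keras.distribution.list_devices("gpu"),
)
layout_map = keras_nlp.models.GemmaBackbone.get_layout_map(device_mesh)

# Split the combined q/k/v rule: the query kernel keeps the model-parallel
# spec, while the 2B key/value kernels get (None, data, None) since their
# leading (num k/v heads) axis is 1 and cannot be sharded.
del layout_map["decoder_block.*attention.*(query|key|value).*kernel"]
layout_map["decoder_block.*attention.*query.*kernel"] = ("model", "batch", None)
layout_map["decoder_block.*attention.*(key|value).*kernel"] = (None, "batch", None)

distribution = keras.distribution.ModelParallel(
    device_mesh, layout_map, batch_dim_name="batch"
)
keras.distribution.set_distribution(distribution)
model = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
```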
I am new to this, so definitely don't listen to me too much...but for folks like me struggling to squish this onto consumer GPUs, it'd be nice to have some model parallelism everywhere.
Describe the bug
When attempting to shard a `gemma_2b_en` model across two (consumer-grade) GPUs, I get an error. The problem is the attention key/value kernels.

`gemma_2b_en` decoder layer shapes:

`gemma_7b_en` decoder layer shapes:

Observe that the leading dimension of `decoder_block.*attention.*(key|value).*kernel` is divisible by 2/4/8/16 in `gemma_7b_en` but not in `gemma_2b_en`.
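For reference, the kernel shapes for both presets can be dumped with a short sketch like the one below (the filtering and the reliance on the Keras 3 `Variable.path` attribute are my assumptions, not the exact script used above):

```python
import keras_nlp

# Print the attention q/k/v kernel paths and shapes for both presets.
for preset in ("gemma_2b_en", "gemma_7b_en"):
    backbone = keras_nlp.models.GemmaBackbone.from_preset(preset)
    print(preset)
    for v in backbone.weights:
        if "attention" in v.path and any(
            part in v.path for part in ("query", "key", "value")
        ):
            print(f"  {v.path}: {v.shape}")
```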
Additional context
This was introduced in https://github.com/keras-team/keras-nlp/pull/1491.
layout_map["decoder_block.*attention.*(query|key|value).*kernel"]
was changed from(None, None, model_dim)
to(model_dim, data_dim, None)
.cc @qlzh727 @mattdangerw
There are other issues filed around lora training and the layout_map regular expressions. This is unrelated to those; it reproduces without lora enabled.
Would you like to help us fix it?
Sure, although I don't know what the preferred fix is. One obvious choice would be to make `get_layout_map` no longer a static method, so we can pick optimal layouts for each model size.
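A rough illustration of what a per-model choice could look like (purely hypothetical, not existing keras-nlp API; the helper name and the reliance on the preset's `num_key_value_heads` config are assumptions):

```python
import keras
import keras_nlp

def gemma_kv_kernel_layout(preset, device_mesh, model_dim="model", data_dim="batch"):
    """Hypothetical helper: choose a k/v kernel sharding spec per model size.

    Reads only the preset config (load_weights=False) and shards the leading
    (num k/v heads) axis only when it divides evenly across the
    model-parallel devices; otherwise falls back to (None, data, None).
    """
    backbone = keras_nlp.models.GemmaBackbone.from_preset(preset, load_weights=False)
    n_model = device_mesh.shape[device_mesh.axis_names.index(model_dim)]
    if backbone.num_key_value_heads % n_model == 0:
        return (model_dim, data_dim, None)  # e.g. gemma_7b_en
    return (None, data_dim, None)  # e.g. gemma_2b_en, which has a single k/v head
```

The result would then be assigned to the `(key|value)` pattern in the layout map before constructing the `ModelParallel` distribution.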