huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Batch Generation giving different output when using batch size > 1 or when using padding in MambaForCausalLM #31540

Open piyushdevlpr opened 2 weeks ago

piyushdevlpr commented 2 weeks ago

System Info

Who can help?

@ArthurZucker @gante

Information

Tasks

Reproduction

I have trained a MambaForCausalLM model on a custom dataset. I am using the following code to generate the next token in eval mode:

import torch
from transformers import MambaForCausalLM, PreTrainedTokenizerFast

model = MambaForCausalLM.from_pretrained("mamba_custom").to("cuda:0")
model.eval()
tokenizer = PreTrainedTokenizerFast.from_pretrained("mamba_tokenizer_custom")
tokenizer.padding_side = 'left'

# `sentence` is the input text; pad on the left up to a fixed length of 100 tokens
inputs_rl = tokenizer(sentence, padding="max_length", truncation=True, max_length=100, return_tensors="pt").to("cuda:0")
with torch.no_grad():
    outputs = model(inputs_rl["input_ids"], attention_mask=inputs_rl["attention_mask"])

Expected behavior

The tokenizer pads the input on the left side. When I remove the argument padding="max_length" so the inputs are tokenized without padding, I get different tokens as predictions.

Using model.generate shows the same issue.
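
A minimal comparison sketch of what is being described, assuming the same custom model and tokenizer as above and using `sentence` as a placeholder for the input text:

import torch
from transformers import MambaForCausalLM, PreTrainedTokenizerFast

model = MambaForCausalLM.from_pretrained("mamba_custom").to("cuda:0")
model.eval()
tokenizer = PreTrainedTokenizerFast.from_pretrained("mamba_tokenizer_custom")
tokenizer.padding_side = 'left'

sentence = "..."  # placeholder input text

# Same sentence, tokenized with and without left padding to max_length
padded = tokenizer(sentence, padding="max_length", truncation=True, max_length=100, return_tensors="pt").to("cuda:0")
unpadded = tokenizer(sentence, truncation=True, max_length=100, return_tensors="pt").to("cuda:0")

with torch.no_grad():
    logits_padded = model(padded["input_ids"], attention_mask=padded["attention_mask"]).logits
    logits_unpadded = model(unpadded["input_ids"]).logits

# The next-token prediction at the last position differs between the two runs
print(logits_padded[:, -1].argmax(-1), logits_unpadded[:, -1].argmax(-1))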

amyeroberts commented 2 weeks ago

cc @gante

gante commented 2 weeks ago

@piyushdevlpr 👋

Mamba, contrary to transformer models, does not take an attention mask as input (see the signature here). As such, it does not support padding and will return different values: pad tokens are processed by the state-space recurrence like any other tokens, so they change the hidden state and thus the outputs.

(I'm going to open a PR to try to prevent this issue from happening again)
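
For anyone needing consistent results in the meantime, a possible workaround sketch (an assumption on my part, not an official recommendation): skip padding entirely and run the sequences one at a time, so no pad tokens ever reach the forward pass. The model/tokenizer names are the custom ones from the report, and `sentences` is a hypothetical list of inputs.

import torch
from transformers import MambaForCausalLM, PreTrainedTokenizerFast

model = MambaForCausalLM.from_pretrained("mamba_custom").to("cuda:0")
model.eval()
tokenizer = PreTrainedTokenizerFast.from_pretrained("mamba_tokenizer_custom")

sentences = ["first input", "second input"]  # hypothetical inputs

with torch.no_grad():
    for text in sentences:
        # No padding: each sequence keeps its true length, so the outputs
        # do not depend on any padding configuration.
        inputs = tokenizer(text, truncation=True, max_length=100, return_tensors="pt").to("cuda:0")
        generated = model.generate(inputs["input_ids"], max_new_tokens=20)
        print(tokenizer.decode(generated[0], skip_special_tokens=True))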