huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

LlamaForSequenceClassification forward method show different results with input_ids/inputs_embeds #34218

Open ChitandaErumanga opened 4 days ago

ChitandaErumanga commented 4 days ago

System Info

transformers 4.44.0

Who can help?

@ArthurZucker

Information

Tasks

Reproduction

from typing import Optional, Tuple

import torch
import torch.nn as nn
from transformers import AutoTokenizer, LlamaForSequenceClassification

llama_tokenizer = AutoTokenizer.from_pretrained("../Meta-Llama-3.2-1B-Instruct", padding_side="right")

llama_tokenizer.pad_token = "<|finetune_right_pad_id|>"

llama_model = LlamaForSequenceClassification.from_pretrained(
    "../Meta-Llama-3.2-1B-Instruct",
    num_labels=1,
    torch_dtype=torch.bfloat16,
)
class CustomEmbeddingModel_input_embeds(nn.Module):
    def __init__(self, original_model, tokenizer):
        super().__init__()
        self.original_model = original_model
    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ):
        if inputs_embeds is None:
            # embed the tokens with the wrapped model's own embedding layer
            inputs_embeds = self.original_model.model.embed_tokens(input_ids)
        return self.original_model(
            input_ids=None,
            attention_mask=attention_mask,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            labels=labels,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
llama_model_input_embeds = CustomEmbeddingModel_input_embeds(llama_model, llama_tokenizer)

class CustomEmbeddingModel_input_ids(nn.Module):
    def __init__(self, original_model, tokenizer):
        super().__init__()
        self.original_model = original_model
    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ):
        if inputs_embeds is None:
            # computed for symmetry with the other wrapper, but not forwarded below
            inputs_embeds = self.original_model.model.embed_tokens(input_ids)
        return self.original_model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            past_key_values=past_key_values,
            inputs_embeds=None,
            labels=labels,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
llama_model_input_ids = CustomEmbeddingModel_input_ids(llama_model, llama_tokenizer)
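
For reference, this is how I compare the two wrappers (a sketch assuming the objects defined above; config.pad_token_id also has to be set on the model for batched classification):

llama_model.config.pad_token_id = llama_tokenizer.pad_token_id  # needed for batched classification

batch = llama_tokenizer(
    ["a short input", "a noticeably longer input sequence for comparison"],
    padding=True,
    return_tensors="pt",
)

with torch.no_grad():
    out_ids = llama_model_input_ids(**batch)        # pools at the last non-pad token
    out_embeds = llama_model_input_embeds(**batch)  # falls back to the last (padded) position

print(out_ids.logits)     # differs from the line below for the right-padded row
print(out_embeds.logits)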

Expected behavior

https://github.com/huggingface/transformers/blob/3f06f95ebe617b192251ef756518690f5bc7ff76/src/transformers/models/llama/modeling_llama.py#L1314

sequence_lengths is computed only from input_ids, so when inputs_embeds is used instead it defaults to -1. However, the forward method of LlamaModel does not accept both input_ids and inputs_embeds at the same time, so there is no way to pass the token ids just for this computation.
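
To make the effect concrete, here is a toy illustration (made-up numbers, not library code) of the pooling step that follows the linked line, pooled_logits = logits[torch.arange(batch_size), sequence_lengths]. With right padding, the fallback sequence_lengths = -1 selects a pad position for the shorter row:

import torch

# per-position "logits" for a batch of 2, seq_len 5, num_labels 1;
# sequence 0 is right-padded after position 2
logits = torch.arange(2 * 5, dtype=torch.float).reshape(2, 5, 1)

# with input_ids: pool at the last non-pad position of each row
sequence_lengths = torch.tensor([2, 4])
pooled_from_ids = logits[torch.arange(2), sequence_lengths]   # tensor([[2.], [9.]])

# with inputs_embeds: sequence_lengths falls back to -1, i.e. the last
# (possibly padded) position of every row
pooled_from_embeds = logits[torch.arange(2), -1]              # tensor([[4.], [9.]])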

ChitandaErumanga commented 4 days ago

When I tried to use inputs_embeds, I passed both input_ids and inputs_embeds and set

        transformer_outputs = self.model(
            None,#input_ids
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

in the forward method of LlamaForSequenceClassification, so that sequence_lengths is still computed from input_ids while the inner LlamaModel only receives inputs_embeds.

Wangmerlyn commented 4 days ago

The problem is that inputs_embeds is never checked for pad tokens:

        if self.config.pad_token_id is None:
            sequence_lengths = -1
        else:
            if input_ids is not None:
                # if no pad token found, use modulo instead of reverse indexing for ONNX compatibility
                sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
                sequence_lengths = sequence_lengths % input_ids.shape[-1]
                sequence_lengths = sequence_lengths.to(logits.device)
            else:
                sequence_lengths = -1

According to the docstring in Llama's modeling file, checking for pad-token embeddings in inputs_embeds is not implemented because the padding-token embedding is unknown at that point.

Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch).

However, I was wondering whether it would be possible to compare the provided inputs_embeds against the embedding of the pad token (retrieved via pad_token_id), rather than simply taking the last value in each row. This would let the model identify pad-token embeddings explicitly even when inputs_embeds are used.
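
A rough sketch of what that could look like (illustrative only, not library code; the helper name is made up, and it assumes the padded positions really contain the unmodified pad-token embedding):

import torch

def sequence_lengths_from_embeds(model, inputs_embeds: torch.Tensor) -> torch.Tensor:
    """Return the index of the last non-pad position per row, mirroring the
    input_ids-based logic in LlamaForSequenceClassification.forward."""
    pad_id = model.config.pad_token_id
    pad_embed = model.get_input_embeddings().weight[pad_id]                  # (hidden,)
    is_pad = torch.isclose(inputs_embeds, pad_embed, atol=1e-6).all(dim=-1)  # (batch, seq)
    # same modulo trick as the input_ids branch: argmax finds the first pad,
    # subtract 1, and wrap to the last position when no pad is present
    sequence_lengths = is_pad.int().argmax(-1) - 1
    return sequence_lengths % inputs_embeds.shape[1]

The obvious caveat is that this only works when inputs_embeds is built from the model's own embedding matrix; soft prompts or a custom projection would not match the pad embedding exactly.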

LysandreJik commented 4 days ago

cc @ArthurZucker maybe