huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Additional options for include_num_input_tokens_seen in Trainer #32469

Open iarbel84 opened 2 months ago

iarbel84 commented 2 months ago

Feature request

Track only the actual training tokens, excluding padding tokens from the count

Motivation

It appears that this metric also counts padding tokens. If example packing is used, the metric does track the “correct” number of tokens seen by the model.

However, I can think of two cases where this will not be accurate:

  1. In cases where packing is not used, training examples are padded to the longest sequence in the batch
  2. For SFT training on completions only

For the first case, a more accurate calculation would be to sum the attention mask. For the second case, I'm not sure how it should be handled; one option is to count only the label tokens != -100. Both counts are sketched below.
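For illustration, here is a minimal sketch of the different per-batch counts on a toy batch. The dict layout (input_ids / attention_mask / labels with -100 for ignored positions) is a common convention and is assumed here; this is not the Trainer's actual code:

import torch

# Toy batch: two sequences padded to length 5, with prompt tokens masked from
# the loss via label -100 (as in completion-only SFT).
inputs = {
    "input_ids": torch.tensor([[5, 6, 7, 0, 0], [8, 9, 10, 11, 12]]),
    "attention_mask": torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]),
    "labels": torch.tensor([[-100, 6, 7, -100, -100], [-100, -100, 10, 11, 12]]),
}

# Current behaviour (roughly): every element of input_ids, padding included.
all_tokens = inputs["input_ids"].numel()                  # 10

# Case 1: count only non-padding tokens by summing the attention mask.
non_padding_tokens = int(inputs["attention_mask"].sum())  # 8

# Case 2 (completion-only SFT): count only tokens that contribute to the loss.
completion_tokens = int((inputs["labels"] != -100).sum()) # 5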

Your contribution

Replace lines 2248-2258 in trainer.py (v4.43.4) with the following:

# Sum the attention mask (i.e. non-padding tokens) for the local batch, then
# gather the per-process counts and sum them before updating the global state.
self.state.num_input_tokens_seen += (
    torch.sum(
        self.accelerator.gather(
            torch.tensor(
                inputs['attention_mask'].sum(), device=self.args.device, dtype=torch.int64
            )
        )
    )
    .cpu()
    .item()
)
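If the surrounding lines behave as described above (counting every element of the main input, padding included), the only change in this snippet is summing the attention mask instead; the accelerator.gather / torch.sum wrapper is kept, so the per-process counts are still aggregated across all data-parallel workers before the state is updated.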
amyeroberts commented 2 months ago

Hi @iarbel84, thanks for opening this feature request!

Makes sense to me - would you like to open a PR with the suggested change?

cc @SunMarc @muellerzr

iarbel84 commented 2 months ago

Yes, I'll be happy to. A few points for discussion:

  1. When it comes to training on completions - what should the behavior be? Are we interested in counting only the tokens that contribute to the loss, or all the tokens?
  2. Currently, the count uses the model's "main_input_name" property, which defaults to "input_ids". Is it acceptable to simply use "attention_mask" in the new feature?
  3. Are we looking to change the behavior of the current feature, or to add new options? (A possible shape for an opt-in option is sketched below.)
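For discussion, a minimal sketch of what an opt-in setting could look like. The field name num_tokens_seen_strategy and its values are purely hypothetical and are not an existing TrainingArguments option:

from dataclasses import dataclass, field

@dataclass
class TokenCountingArguments:
    """Hypothetical container for an opt-in setting; not part of TrainingArguments today."""

    # "all" keeps today's behaviour (count every element of the main input),
    # "non_padding" would sum the attention mask, and "completion" would count
    # only label positions != -100.
    num_tokens_seen_strategy: str = field(
        default="all",
        metadata={
            "help": "How to count tokens for num_input_tokens_seen: one of 'all', 'non_padding', 'completion'."
        },
    )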
amyeroberts commented 1 month ago

Hi @iarbel84, apologies for the delay in response.

Let's get the input from @SunMarc or @muellerzr here.

In general, for

  1. I would keep the behaviour which is most consistent with what currently happens (which I believe is to count all the tokens).
  2. I think you'll have to add some checks that "attention_mask" is actually provided, although it should be possible to use. You'll probably also need to account for whether it's in the 2D format or not (see the rough sketch after this list).
  3. I would add a new option -- otherwise people's metrics will change under their feet. It might be that adding this introduces more complexity into Trainer, and we'll need to think about either abstracting it out to make it easy to swap, or leaving the PR as a feature branch people can check out and use without it being merged into main.
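To illustrate point 2, a rough sketch of the kind of guard that might be needed before summing the mask. The helper name is hypothetical and the shape handling is an assumption, not the final implementation:

import torch

def count_non_padding_tokens(inputs: dict) -> int:
    # Hypothetical helper: count non-padding tokens for one batch.
    attention_mask = inputs.get("attention_mask")
    if attention_mask is not None and attention_mask.dim() == 2:
        # Standard (batch_size, seq_len) 0/1 padding mask: sum it directly.
        return int(attention_mask.sum())
    # Fallback: the current behaviour, which counts padding tokens as well
    # (e.g. when the mask is missing or already expanded beyond 2D).
    return int(inputs["input_ids"].numel())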