huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

encoder_outputs are always the same when generating with different inputs #5489

Closed: bobshih closed this issue 4 years ago

bobshih commented 4 years ago

❓ Questions & Help

Details

Hi,

I've trained a bert2bert model to generate answers to different questions, but after training the model always produces the same encoder_outputs for different inputs. Does anyone know how to fix or avoid this problem? If I don't resize BERT's embedding size, will that solve the problem?

Thanks in advance.


Below is my training code. The inputs are converted to indices with tokenizer.encode_plus.

import logging
import os
import sys
import inspect
import json
import argparse
from dataclasses import dataclass, fields

from tqdm.auto import tqdm, trange
import torch
from torch.utils.data import DataLoader, Dataset  # Dataset is required by Generator_Data below
from transformers import (
    EncoderDecoderModel,
    AdamW,
    get_linear_schedule_with_warmup,
    BertTokenizer,
    PreTrainedModel
)

# import utils
logger = logging.getLogger(__name__)

@dataclass
class training_args:
    weight_decay: float = 0.0
    learning_rate: float = 5e-5
    adam_epsilon: float = 1e-8
    warmup_steps: int = 0
    gradient_accumulation_steps: int = 1
    # num_train_epochs: 10
    max_grad_norm: float = 1.0
    early_stop:  float = 1e-5
    stop_barrier: float = 1e-5

def set_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--vocab_file", default='vocab_trad_clean.txt')
    # parser.add_argument("--encoder_config", default='Configs/encoder.json')
    # parser.add_argument("--decoder_config", default='Configs/decoder.json')
    parser.add_argument("--data_folder", required=True)
    # parser.add_argument("--output_folder", required=True)
    # parser.add_argument("--from_pretrained", action='store_true')
    parser.add_argument("--logging_steps", default=1000, type=int)
    parser.add_argument("--save_total_limit", default=5, type=int)
    parser.add_argument("--save_steps", default=10000, type=int)
    parser.add_argument("--batch_size", default=20, type=int)
    parser.add_argument("--num_train_epochs", default=30, type=int)
    args = parser.parse_args()
    return args

class Generator_Data(Dataset):
    def __init__(self, data):
        super(Generator_Data, self).__init__()
        self.inputs = []
        self.outputs = []
        for example in data:
            self.inputs.append(example['source'])
            self.outputs.append(example['target'])

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, index):
        return self.inputs[index], self.outputs[index]

def collate_fn(batch):
    input_dict = {
        "input_ids": [],
        "decoder_input_ids": [],
        "labels": [],
    }
    for data in batch:
        input_data = data[0]
        output_data = data[1]
        input_dict["input_ids"].append(input_data["input_ids"])
        input_dict["decoder_input_ids"].append(output_data["input_ids"])
        input_dict["labels"].append(output_data["input_ids"])
    input_dict = {k: torch.LongTensor(v) for k, v in input_dict.items()}
    return input_dict

def Get_DataLoader(data_file, batch_size, training=False):
    if not os.path.isfile(data_file):
        raise Exception(f"data file [{data_file}] doesn't exist in util, LoadDataset")
    logger.info(f"start loading data from {data_file}")
    data = torch.load(data_file)
    dataset = Generator_Data(data)
    logger.info("turn dataset into dataloader")
    if training:
        loader = DataLoader(dataset, batch_size, shuffle=True, collate_fn=collate_fn)
    else:
        loader = DataLoader(dataset, batch_size, shuffle=False, collate_fn=collate_fn)
    return loader

if __name__ == "__main__":
    args = set_args()
    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s -   %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        level=logging.INFO,
    )

    tokenizer = BertTokenizer.from_pretrained('bert-base-chinese', vocab_file=args.vocab_file)
    tokenizer.add_tokens('[NewLine]')
    tokenizer.add_tokens('[space]')
    args.output_folder = 'Seq2Seq_Transformers/Model/test'
    os.makedirs(args.output_folder, exist_ok=True)
    tokenizer.save_pretrained(args.output_folder)

    model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-chinese", "bert-base-chinese")
    model.encoder.resize_token_embeddings(len(tokenizer))
    model.decoder.resize_token_embeddings(len(tokenizer))
    model.config.encoder.vocab_size = len(tokenizer)
    model.config.decoder.vocab_size = len(tokenizer)
    if torch.cuda.is_available():
        args.device = torch.device("cuda")
        args.n_gpu = torch.cuda.device_count()
    else:
        args.device = torch.device("cpu")
        args.n_gpu = 0
    model.to(args.device)
    if args.n_gpu > 1:
        model = torch.nn.DataParallel(model)
    # loading the data
    train_pt_file = os.path.join(args.data_folder, 'train.pt')
    valid_pt_file = os.path.join(args.data_folder, 'valid.pt')
    train_dataloader = Get_DataLoader(train_pt_file, batch_size=args.batch_size, training=True)
    valid_dataloader = Get_DataLoader(valid_pt_file, batch_size=args.batch_size)
    # Prepare optimizer and schedule (linear warmup and decay)
    t_total = int(len(train_dataloader) // training_args.gradient_accumulation_steps * args.num_train_epochs)
    no_decay = ["bias", "LayerNorm.weight"]
    optimizer_grouped_parameters = [
        {
            "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
            "weight_decay": training_args.weight_decay
        },
        {
            "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
            "weight_decay": 0.0,
        },
    ]
    optimizer = AdamW(optimizer_grouped_parameters, lr=training_args.learning_rate, eps=training_args.adam_epsilon)
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=training_args.warmup_steps, num_training_steps=t_total
    )
    # start training
    logger.info("***************************")
    for field in fields(training_args):
        logger.info(f"{field.name}: {getattr(training_args, field.name)}")
    logger.info("***************************")
    global_step = 0
    tr_loss = 0.0
    logging_loss = 0.0
    loss_scalar = 1000000
    previous_loss_scaler = -1
    model.train()
    model.zero_grad()
    for epoch in tqdm(range(args.num_train_epochs), desc="Epoch", ascii=True):
        epoch_iterator = tqdm(train_dataloader, desc="Iteration", ascii=True)
        for step, inputs in enumerate(epoch_iterator):
            model.train()
            for k, v in inputs.items():
                inputs[k] = v.to(args.device)

            outputs = model(**inputs)
            # loss, outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=inputs["input_ids"], lm_labels=inputs["input_ids"])[:2]
            loss = outputs[0]  # model outputs are always tuple in transformers (see doc)

            if args.n_gpu > 1:
                loss = loss.mean()  # mean() to average on multi-gpu parallel training
            if training_args.gradient_accumulation_steps > 1:
                loss = loss / training_args.gradient_accumulation_steps

            loss.backward()
            tr_loss += loss.item()

            if (step + 1) % training_args.gradient_accumulation_steps == 0 or (
                # last step in epoch but step is always smaller than gradient_accumulation_steps
                len(epoch_iterator) <= training_args.gradient_accumulation_steps
                and (step + 1) == len(epoch_iterator)
            ):
                torch.nn.utils.clip_grad_norm_(model.parameters(), training_args.max_grad_norm)

                optimizer.step()
                scheduler.step()
                model.zero_grad()
                global_step += 1

                if args.logging_steps > 0 and global_step % args.logging_steps == 0:
                    logs = {}
                    loss_scalar = (tr_loss - logging_loss) / args.logging_steps
                    learning_rate_scalar = scheduler.get_last_lr()[0]
                    logs["learning_rate"] = learning_rate_scalar
                    logs["loss"] = loss_scalar
                    logs["loss_difference"] = abs(loss_scalar-previous_loss_scaler)
                    previous_loss_scaler = loss_scalar
                    logging_loss = tr_loss

                    epoch_iterator.write(json.dumps({**logs, **{"step": global_step}}))
                    if loss_scalar < training_args.early_stop:# or logs["loss_difference"] < training_args.stop_barrier:
                        break

                if args.save_steps > 0 and global_step % args.save_steps == 0:
                    # Save model checkpoint
                    output_dir = os.path.join(args.output_folder, f"checkpoint-{global_step}")
                    os.makedirs(output_dir, exist_ok=True)
                    logger.info("Saving model checkpoint to %s", output_dir)
                    # Save a trained model and configuration using `save_pretrained()`.
                    # They can then be reloaded using `from_pretrained()`
                    if isinstance(model, torch.nn.DataParallel):
                        model = model.module
                    if not isinstance(model, PreTrainedModel):
                        raise ValueError("Trainer.model appears to not be a PreTrainedModel")
                    model.save_pretrained(output_dir)

                    torch.save(optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
                    torch.save(scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt"))
                    logger.info("Saving optimizer and scheduler states to %s", output_dir)
        if loss_scalar < training_args.early_stop:
            break

    output_dir = args.output_folder
    os.makedirs(output_dir, exist_ok=True)
    logger.info("Saving model checkpoint to %s", output_dir)
    # Save a trained model and configuration using `save_pretrained()`.
    # They can then be reloaded using `from_pretrained()`
    if isinstance(model, torch.nn.DataParallel):
        model = model.module
    if not isinstance(model, PreTrainedModel):
        raise ValueError("Trainer.model appears to not be a PreTrainedModel")
    model.save_pretrained(output_dir)
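One detail worth flagging in the script above: collate_fn forwards only input_ids, decoder_input_ids and labels, so no attention_mask ever reaches the model and every [PAD] token is attended to, while the labels still contain the pad ids. A minimal sketch of a collate variant that also carries the masks and hides padding from the loss (assuming each example dict produced by tokenizer.encode_plus also contains an attention_mask; the function name is hypothetical) might look like this:

    import torch

    def collate_fn_with_masks(batch, pad_token_id):
        # batch is a list of (input_data, output_data) pairs, as in the script above
        input_dict = {
            "input_ids": [],
            "attention_mask": [],
            "decoder_input_ids": [],
            "decoder_attention_mask": [],
            "labels": [],
        }
        for input_data, output_data in batch:
            input_dict["input_ids"].append(input_data["input_ids"])
            input_dict["attention_mask"].append(input_data["attention_mask"])
            input_dict["decoder_input_ids"].append(output_data["input_ids"])
            input_dict["decoder_attention_mask"].append(output_data["attention_mask"])
            # replace pad tokens in the labels with -100 so the loss ignores them
            input_dict["labels"].append(
                [-100 if tok == pad_token_id else tok for tok in output_data["input_ids"]]
            )
        return {k: torch.LongTensor(v) for k, v in input_dict.items()}

It could be plugged into the DataLoader with functools.partial(collate_fn_with_masks, pad_token_id=tokenizer.pad_token_id).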

Besides, the encoder_outputs are identical at every time step, as shown in the picture below. I think this is very strange, and I am not sure whether it is the same problem.

[image: screenshot showing identical encoder_outputs at every time step]
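A quick way to verify the symptom directly is to run the encoder on two clearly different inputs and compare the hidden states; the snippet below is only illustrative (the question strings and max_length are placeholders):

    # illustrative check: do two different questions produce the same encoder states?
    enc_a = tokenizer.encode_plus("first question", max_length=64,
                                  padding="max_length", return_tensors="pt")
    enc_b = tokenizer.encode_plus("a completely different question", max_length=64,
                                  padding="max_length", return_tensors="pt")
    model.eval()
    with torch.no_grad():
        out_a = model.encoder(input_ids=enc_a["input_ids"], attention_mask=enc_a["attention_mask"])
        out_b = model.encoder(input_ids=enc_b["input_ids"], attention_mask=enc_b["attention_mask"])
    # identical hidden states for clearly different inputs would confirm the reported behaviour
    print(torch.allclose(out_a[0], out_b[0]))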

patrickvonplaten commented 4 years ago

Hmm, this will be hard to debug here. I'm currently working on getting a working example of a Bert2Bert model, so I will keep an eye on encoder_output bugs! See conversation here: https://github.com/huggingface/transformers/issues/4443#issuecomment-656691026

bobshih commented 4 years ago

Thank you for your reply. I am looking forward to your Bert2Bert example, and I hope we can solve this problem.

patrickvonplaten commented 4 years ago

Hey @bobshih,

Training a Bert2Bert model worked out fine for me - I did not experience any bugs related to encoder_outputs. You can check out the model and all the code to reproduce the results here: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16

Maybe you can take a look, adapt your code and see whether the error persists :-)

bobshih commented 4 years ago

OK, thanks for your attention. I will adapt my code after finishing the work at hand.

bobshih commented 4 years ago

Hi @patrickvonplaten, I have trained an EncoderDecoderModel with your example training script. I noticed that if there are too many padding tokens in the training data, the trained model produces the same vectors for different inputs. But I wonder why the attention mask does not prevent this? In my original training setting, 93% of the tokens were padding. After I reduced the max length so that padding dropped to 21%, the EncoderDecoderModel works without problems.
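For reference, passing the mask explicitly in both the forward pass and generate() would look roughly like the sketch below (assuming a batch dictionary that carries an attention_mask, as in the collate variant sketched earlier; the decoder_start_token_id is the usual [CLS] choice for a bert2bert setup):

    # training step: the mask keeps [PAD] positions out of the encoder self-attention
    outputs = model(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
        decoder_input_ids=batch["decoder_input_ids"],
        labels=batch["labels"],  # pad label positions already set to -100
    )
    loss = outputs[0]

    # generation: pass the same mask so the encoder_outputs ignore the padding
    generated = model.generate(
        batch["input_ids"],
        attention_mask=batch["attention_mask"],
        decoder_start_token_id=tokenizer.cls_token_id,
    )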

patrickvonplaten commented 4 years ago

This line:

https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#training-script:

    batch["labels"] = [
        [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]
    ]

in the preprocessing should make sure that the PAD token does not influence the loss, and therefore does not influence the model either.
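For context, this works because PyTorch's cross-entropy loss ignores the index -100 by default, so masked label positions contribute nothing to the loss or gradients. A tiny self-contained illustration (values are made up):

    import torch
    from torch.nn import CrossEntropyLoss

    logits = torch.randn(4, 10)                # 4 label positions, vocabulary of 10
    labels = torch.tensor([3, 7, -100, -100])  # the last two positions are padding
    loss = CrossEntropyLoss()(logits, labels)  # ignore_index defaults to -100,
                                               # so only the two real labels contribute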

bobshih commented 4 years ago

This line:

https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#training-script:

    batch["labels"] = [
        [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]
    ]

in the preprocessing should make sure that the PAD token does not influence the loss, and therefore does not influence the model either.

Yes, I understand what you mentioned, and I also used this setting after adapting my script, but the problem showed up again. I will retrain the model with this setting over the weekend and hope for a different result. Again, thank you very much for your help and patience.