huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

TFEncoderDecoderModel loading TF weights issue #14002

Closed · ydshieh closed this issue 2 years ago

ydshieh commented 3 years ago

Environment info

Information

The recently added TFEncoderDecoderModel has an issue: in order to load from a PyTorch checkpoint, the workaround stated in the documentation is

from transformers import EncoderDecoderModel, TFEncoderDecoderModel

# Load the PyTorch checkpoint, save encoder and decoder separately,
# then build the TF model from those parts with from_pt=True.
_model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")
_model.encoder.save_pretrained("./encoder")
_model.decoder.save_pretrained("./decoder")
model = TFEncoderDecoderModel.from_encoder_decoder_pretrained(
    "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True
)

However, saving this model and reloading it does not restore the TF weights correctly:

model.save_pretrained("./temp")
model = TFEncoderDecoderModel.from_pretrained("./temp")  # This does not restore the weights correctly.
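Concretely, the reloaded model ends up with freshly initialized weights, as the warnings further below show. A minimal check (a sketch only; reloaded is a hypothetical name, used so the original model stays available for comparison):

import numpy as np

# Sketch: model is the TF model built via the workaround above.
before = {w.name: w.numpy() for w in model.weights}
model.save_pretrained("./temp")
reloaded = TFEncoderDecoderModel.from_pretrained("./temp")
# Any weight that is not numerically identical was not restored from the checkpoint.
for w in reloaded.weights:
    if w.name in before and not np.allclose(before[w.name], w.numpy()):
        print("not restored:", w.name)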

To reproduce

Steps to reproduce the behavior:

from transformers import EncoderDecoderModel, TFEncoderDecoderModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Workaround to load from a PyTorch checkpoint (see above).
_model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")
_model.encoder.save_pretrained("./encoder")
_model.decoder.save_pretrained("./decoder")
model = TFEncoderDecoderModel.from_encoder_decoder_pretrained(
    "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True
)
# Copy the config so that some specific attributes of this particular
# checkpoint carry over to the TF model.
model.config = _model.config

article = """
    (CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members
    singing a racist chant. SAE's national chapter suspended the students, but University of
    Oklahoma President David Boren took it a step further, saying the university's affiliation
    with the fraternity is permanently done. The news is shocking, but it's not the first time
    SAE has faced controversy. SAE was founded March 9, 1856, at the University of Alabama,
    five years before the American Civil War, according to the fraternity website. When the war
    began, the group had fewer than 400 members, of which "369 went to war for the Confederate
    States and seven for the Union Army," the website says. The fraternity now boasts more than
    200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20
    "colonies" seeking full membership at universities. SAE has had to work hard to change recently
    after a string of member deaths, many blamed on the hazing of new recruits, SAE national President
    Bradley Cohen wrote in a message on the fraternity's website. The fraternity's website lists
    more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least
    30 of the incidents involved hazing, and dozens more involved alcohol. However, the list is
    missing numerous incidents from recent months. Among them, according to various media outlets:
    Yale University banned the SAEs from campus activities last month after members allegedly
    tried to interfere with a sexual misconduct investigation connected to an initiation rite.
    Stanford University in December suspended SAE housing privileges after finding sorority members
    attending a fraternity function were subjected to graphic sexual content. And Johns Hopkins University
    in November suspended the fraternity for underage drinking. "The media has labeled us as the
    'nation's deadliest fraternity,' " Cohen said. In 2011, for example, a student died while being
    coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped
    the fraternity. "As a result, we are paying Lloyd's of London the highest insurance rates in the
    Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters,
    and the fraternity had to close 12 in 18 months over hazing incidents."""

input_dict = tokenizer(article, return_tensors="tf")

output_ids = model.generate(input_ids=input_dict["input_ids"], max_length=None).numpy().tolist()
summary = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print(summary)

model.save_pretrained("./temp")
model = TFEncoderDecoderModel.from_pretrained("./temp")

output_ids = model.generate(input_ids=input_dict["input_ids"], max_length=None).numpy().tolist()
summary = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print(summary)
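To turn the visual comparison of the two printouts into a mechanical check, the two generations can be compared directly (a sketch; summary_before and summary_after are hypothetical names for the first and second results above):

# Sketch: bind the two generations to distinct names and compare them.
assert summary_before == summary_after, "generation changed after the save/reload round trip"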

Outputs:

Loading from PT weights as in the workaround

["sae was founded in 1856, five years before the civil war. the fraternity has had to work hard to change recently ...

After saving and reloading the TF weights

['banning figurative banning figurative grandma discontinued keynoteeborgronia encouraged ...

The warnings given when reloading the TF weights:

Some layers from the model checkpoint at ./temp were not used when initializing TFEncoderDecoderModel: ['bert/encoder/layer_._9/output/dense/bias:0' ...

Some layers of TFEncoderDecoderModel were not initialized from the model checkpoint at ./temp and are newly initialized: ['encoder/bert/encoder/layer_._3/attention/self/key/bias:0' ...
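The two warnings point to a naming mismatch: the checkpoint stores variables under names like bert/encoder/... while the freshly built model expects encoder/bert/encoder/..., i.e. the top-level encoder/decoder scope appears to be missing from the saved names. One way to inspect the names actually stored in the checkpoint (a sketch, assuming save_pretrained wrote ./temp/tf_model.h5 in Keras H5 format):

import h5py

# Sketch: list the layer names recorded in the H5 checkpoint.
with h5py.File("./temp/tf_model.h5", "r") as f:
    layer_names = [n.decode("utf8") if isinstance(n, bytes) else n
                   for n in f.attrs["layer_names"]]
    print(layer_names[:10])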

Expected behavior

The weights should be restored correctly by from_pretrained, and the outputs before and after the save/reload round trip should be exactly the same.


github-actions[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.