
VideoMAEForPreTraining cannot be trained with bfloat16 #27295

Closed ikergarcia1996 closed 1 year ago

ikergarcia1996 commented 1 year ago

System Info

Who can help?

@amyeroberts

Information

Tasks

Reproduction

It is not possible to train VideoMAEForPreTraining with bfloat16, because the labels are always stored as float32. This code snippet triggers the error.

from transformers import AutoImageProcessor, VideoMAEForPreTraining
import numpy as np
import torch

# Dummy 16-frame video
num_frames = 16
video = list(np.random.randint(0, 256, (num_frames, 3, 224, 224)))

image_processor = AutoImageProcessor.from_pretrained("MCG-NJU/videomae-base")
# Load the model directly in bfloat16
model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base", torch_dtype=torch.bfloat16).to("cuda")

pixel_values = image_processor(video, return_tensors="pt").pixel_values

# Random boolean mask over the tubelet patches
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()

# Forward pass in bfloat16; the returned loss is float32, so backward fails
outputs = model(pixel_values.to(device=model.device, dtype=model.dtype), bool_masked_pos=bool_masked_pos)
loss = outputs.loss

loss.backward()

Full Traceback

RuntimeError                              Traceback (most recent call last)
Cell In[1], line 20
     17 outputs = model(pixel_values.to(device=model.device,dtype=model.dtype), bool_masked_pos=bool_masked_pos)
     18 loss = outputs.loss
---> 20 loss.backward()

File ~/miniconda3/envs/transformers/lib/python3.10/site-packages/torch/_tensor.py:492, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
    482 if has_torch_function_unary(self):
    483     return handle_torch_function(
    484         Tensor.backward,
    485         (self,),
   (...)
    490         inputs=inputs,
    491     )
--> 492 torch.autograd.backward(
    493     self, gradient, retain_graph, create_graph, inputs=inputs
    494 )

File ~/miniconda3/envs/transformers/lib/python3.10/site-packages/torch/autograd/__init__.py:251, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    246     retain_graph = create_graph
    248 # The reason we repeat the same comment below is that
    249 # some Python versions print out the first line of a multi-line function
    250 # calls in the traceback and some print out the last line
--> 251 Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    252     tensors,
    253     grad_tensors_,
    254     retain_graph,
    255     create_graph,
    256     inputs,
    257     allow_unreachable=True,
    258     accumulate_grad=True,
    259 )

RuntimeError: Found dtype Float but expected BFloat16

The problem is that when computing the loss, the labels are in float32; therefore, the returned loss is also in float32.

logits: torch.bfloat16
labels: torch.float32
loss: torch.float32
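
The logits and loss dtypes can be checked directly on the outputs object from the snippet above (the labels are built internally from pixel_values, so they are not exposed in the outputs):

print(outputs.logits.dtype)  # torch.bfloat16
print(outputs.loss.dtype)    # torch.float32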

Expected behavior

Labels should be converted to the same dtype as the logits.

PR #27296 fixes the error, although I am not 100% sure it is the best way to handle the problem.
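
For reference, a minimal sketch of the kind of cast that resolves the mismatch, assuming the loss is an MSE between the logits and the reconstructed patch targets (the actual change in #27296 may differ):

import torch
from torch import nn

def reconstruction_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: cast the float32 targets to the logits dtype
    # (bfloat16 here) so the loss, and hence the backward pass, runs in that dtype.
    labels = labels.to(logits.dtype)
    return nn.MSELoss()(logits, labels)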

amyeroberts commented 1 year ago

Hi @ikergarcia1996 thanks for reporting and opening a PR!

I've started a review on the PR around implementation specifics, and I think once it's merged that should resolve the issue.

ikergarcia1996 commented 1 year ago

Fixed by #27296