pytorch / torchtitan

A native PyTorch Library for large model training
BSD 3-Clause "New" or "Revised" License

[WIP] Adding OBELICS DataLoader #663

Open TJ-Solergibert opened 4 weeks ago

TJ-Solergibert commented 4 weeks ago

Hi,

In this PR I present a first draft of the Multimodal DataLoader. First I will describe how the batches are created and then I will explain the padding problem.

[image]

Let's begin by checking the OBELICS dataset. For every sample in the dataset we have 4 keys, but we are only interested in 2 of them: the list of images and the interleaved text.
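
For reference, a minimal sketch of how a raw sample can be inspected (assuming the HuggingFaceM4/OBELICS dataset on the Hugging Face Hub and the datasets library; streaming avoids downloading the full dataset):

```python
# Sketch only: peek at one raw OBELICS sample to see its keys.
from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/OBELICS", split="train", streaming=True)
sample = next(iter(ds))
print(sample.keys())  # 4 top-level keys; only the images and the interleaved text are used
```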

The format_obelics function will transform each sample into a format that can later be fed into the transform block, which prepares the samples for the target type. Each formatted sample will be a dictionary containing 2 keys:

[image]

Once formatted, we will process each sample with the transform block. This transform block is composed of CLIPPreprocess, TikTokenizer & VisionCrossAttentionMask modules.

CLIPPreprocess

[image]

This module will prepare the List of images to be fed into the CLIP model. The most relevant steps are resizing the image without distortion, dividing the image into tiles, and padding if necessary. Note that it will still produce a List of tensors and NOT a single tensor, as every image can have a different number of tiles. This will be addressed in the collator, where we will pad the image tiles to the largest in the batch. Also, we keep the maximum number of tiles at 4 and the tile size at 448 for pretraining [1], [2].
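
As a rough illustration of those steps (not the actual CLIPPreprocess code; the helper below is hypothetical and always uses the full tile grid, while the real module picks the best grid for the aspect ratio):

```python
import math
import torch
import torch.nn.functional as F

def tile_image(image: torch.Tensor, tile_size: int = 448, max_tiles: int = 4) -> torch.Tensor:
    """Sketch: [C, H, W] float image -> [n_tiles, C, tile_size, tile_size]."""
    c, h, w = image.shape
    grid = int(math.sqrt(max_tiles))  # e.g. a 2x2 grid for max_tiles=4
    # 1) Resize without distortion: a single scale factor for both sides.
    scale = min(grid * tile_size / h, grid * tile_size / w)
    new_h, new_w = int(h * scale), int(w * scale)
    image = F.interpolate(image.unsqueeze(0), size=(new_h, new_w),
                          mode="bilinear", align_corners=False).squeeze(0)
    # 2) Pad bottom/right so both sides are multiples of the tile size.
    pad_h = (tile_size - new_h % tile_size) % tile_size
    pad_w = (tile_size - new_w % tile_size) % tile_size
    image = F.pad(image, (0, pad_w, 0, pad_h))
    # 3) Split into non-overlapping tiles.
    tiles = image.unfold(1, tile_size, tile_size).unfold(2, tile_size, tile_size)
    return tiles.permute(1, 2, 0, 3, 4).reshape(-1, c, tile_size, tile_size)
```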

TikTokenizer

I've included a new method in the tokenizer to encode the multimodal text. In short, it just encodes the text, adding the special image_id token, and returns both the input_ids & labels, masking the bos, eos & image_id tokens in the labels.
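
A minimal sketch of what that method could look like (hypothetical name and signature; the actual special-token ids come from the tokenizer itself):

```python
IGNORE_INDEX = -100  # assumed label value ignored by the loss

def encode_multimodal(tokenizer, chunks, bos_id, eos_id, image_id):
    """chunks: interleaved sample where None marks an image slot and str marks text."""
    input_ids = [bos_id]
    for chunk in chunks:
        if chunk is None:  # an image goes here
            input_ids.append(image_id)
        else:
            input_ids.extend(tokenizer.encode(chunk, bos=False, eos=False))
    input_ids.append(eos_id)
    # Labels mirror input_ids, with special tokens masked so they produce no loss.
    labels = [IGNORE_INDEX if t in (bos_id, eos_id, image_id) else t for t in input_ids]
    return input_ids, labels
```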

VisionCrossAttentionMask

[image]

This module will create the attention mask for the Fused layers. In short, for each TILE we will have 1025 image_tokens, and this mask will specify, for each text_token, which image_tokens it should attend to. We are again returning a List of tensors, as the number of image_tokens depends on the number of tiles. Again, we will solve this in the collator.
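
A simplified per-sample sketch of how such a mask could be built (hypothetical helper; the exact attention intervals of the real module may differ, and 1025 presumably corresponds to 1024 patch tokens plus 1 class token per 448x448 tile with patch size 14):

```python
import torch

def cross_attention_masks(input_ids, image_id, n_tiles_per_image, tokens_per_tile=1025):
    """Returns one bool mask of shape [text_len, n_tiles * tokens_per_tile] per image."""
    text_len = len(input_ids)
    image_positions = [i for i, t in enumerate(input_ids) if t == image_id]
    assert len(image_positions) == len(n_tiles_per_image)
    masks = []
    for pos, n_tiles in zip(image_positions, n_tiles_per_image):
        mask = torch.zeros(text_len, n_tiles * tokens_per_tile, dtype=torch.bool)
        mask[pos:] = True  # text tokens at/after the image token may attend to its tiles
        masks.append(mask)  # still a List: n_tiles differs per image
    return masks
```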

Padding & the collator

As we've previously seen, the outputs of both CLIPPreprocess & VisionCrossAttentionMask are lists of tensors because of the different number of tiles. Within the same sample we should pad both artifacts to the maximum number of tiles, but the issue arises when we run batch_size > 1: we will also need to pad the input_ids (& labels), which is relatively cheap, BUT also the number of images, as the input to the CLIP model will be a tensor of shape [Batch size, Number of images, Number of tiles, Channels, Tile size, Tile size]. Padding to the maximum number of tiles is bad, but in the worst-case scenario you end up increasing the tensor 4x (from 1 tile to the maximum of 4 tiles). For the number of images, however, it can get really big, as there are samples with 30+ images.
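
To make the shape problem concrete, padding only the image tensor in the collator could look roughly like this (hypothetical helper; input_ids, labels and the cross attention masks need the same treatment):

```python
import torch

def collate_images(batch_images, tile_size=448):
    """batch_images: per-sample lists of [n_tiles_i, 3, tile_size, tile_size] tensors."""
    max_images = max(len(imgs) for imgs in batch_images)
    max_tiles = max(img.shape[0] for imgs in batch_images for img in imgs)
    out = torch.zeros(len(batch_images), max_images, max_tiles, 3, tile_size, tile_size)
    for b, imgs in enumerate(batch_images):
        for i, img in enumerate(imgs):
            out[b, i, : img.shape[0]] = img  # everything not overwritten stays padding
    return out  # [Batch size, Number of images, Number of tiles, 3, tile_size, tile_size]
```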

To check this phenomenon I've included scripts/check_padding_mm.py, which computes the % of padding in a sample. Feel free to give it a try, but it's very easy to get samples where the majority of the input is padding.

python3 scripts/check_padding_mm.py
Unpadded tokens: 8717, Total tokens in batch: 21728
Padded text tokens: 13011, 59.88%
########################################
Unpadded images: 25, Total images in batch: 64
Padded images: 39, 60.94% (Each image with shape [4, 3, 448, 448])
########################################
Unpadded number of tiles: 61, Total number of tiles: 256
Padded tiles: 195, 68.72% (Each with shape [3, 448, 448])
########################################
Unpadded cross attention mask elements: 545030425, Total cross attention mask elements: 5701427200
Padded cross attention mask elements: 5156396775, 90.44%

That's why I propose to keep working on a DataLoader & Dataset that can pack multiple samples up to a given input_ids length OR number of images per batch. Packing the input_ids is fairly easy, while packing the cross attention masks will require a bit more effort. Let me know if you would be interested in supporting that feature, or if you just want to include in the repo an example of the multimodal pipeline despite the padding issue described. I also plan to include some unit tests to check the generated samples & the ability to recover from failures.
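
For reference, the packing I have in mind could be sketched as a greedy loop over consecutive samples (illustrative only; the field names and limits are placeholders):

```python
def pack_samples(samples, max_seq_len=8192, max_images=16):
    """Greedily group consecutive samples until a token or image budget is exceeded."""
    pack, n_tokens, n_images = [], 0, 0
    for sample in samples:
        s_tokens, s_images = len(sample["input_ids"]), len(sample["images"])
        if pack and (n_tokens + s_tokens > max_seq_len or n_images + s_images > max_images):
            yield pack
            pack, n_tokens, n_images = [], 0, 0
        pack.append(sample)
        n_tokens, n_images = n_tokens + s_tokens, n_images + s_images
    if pack:
        yield pack
```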

Other comments:

Toni

facebook-github-bot commented 4 weeks ago

Hi @TJ-Solergibert!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

facebook-github-bot commented 4 weeks ago

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!