huggingface / transformers

πŸ€— Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Community contribution: enable dynamic resolution input for more vision models. #30579

Open amyeroberts opened 5 months ago

amyeroberts commented 5 months ago

Feature request

Some of our models interpolate their positional embeddings, enabling pretrained checkpoints to be used at different input resolutions. For example, here in ViT.
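The core idea can be sketched as follows. This is a minimal illustration of ViT-style position-embedding interpolation, not the exact transformers implementation: the function name and shapes here are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def interpolate_pos_encoding(pos_embed: torch.Tensor, new_grid: tuple) -> torch.Tensor:
    """Resize ViT-style position embeddings to a new patch grid.

    pos_embed: (1, 1 + old_h * old_w, dim), with the CLS token embedding first.
    new_grid:  (new_h, new_w) target patch grid.
    """
    cls_pos, patch_pos = pos_embed[:, :1], pos_embed[:, 1:]
    dim = pos_embed.shape[-1]
    old = int(patch_pos.shape[1] ** 0.5)  # assume a square source grid
    # (1, N, dim) -> (1, dim, old, old) so we can use 2D interpolation
    patch_pos = patch_pos.reshape(1, old, old, dim).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=new_grid, mode="bicubic", align_corners=False)
    # back to (1, new_h * new_w, dim) and re-attach the CLS embedding
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, -1, dim)
    return torch.cat([cls_pos, patch_pos], dim=1)

# e.g. a 14x14 grid (224px / patch 16) resized for a 320px input (20x20 grid)
pe = torch.randn(1, 1 + 14 * 14, 768)
out = interpolate_pos_encoding(pe, (20, 20))
print(out.shape)  # torch.Size([1, 401, 768])
```

With this in a model's embedding layer, a checkpoint pretrained at 224px can consume larger images without retraining the position embeddings from scratch.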

Motivation

Let's add this to more models, to leverage existing checkpoints for new cases!

Your contribution

For anyone who would like to contribute, please comment on the issue, claiming a model you'd like to work on and share a link to the PR.

Each PR should:

There was a PR opened to add this to CLIP models, which is now inactive but is a useful reference for the changes to make: https://github.com/huggingface/transformers/pull/27457

Once the PR is ready, you can ping me for review πŸ€—

ashvinnihalani commented 5 months ago

I can take Clip and Blip2.

NielsRogge commented 5 months ago

Some heads up here; people have complained about the fact that interpolate_pos_encoding cannot be passed when using the Trainer to train models on higher-resolution images. I also am not that happy I named it interpolate_pos_encoding, should have been called interpolate_position_embeddings.

bhuvanmdev commented 5 months ago

I can work on vit_mae and tvp.

amyeroberts commented 5 months ago

Thanks for the heads up @NielsRogge!

Some heads up here; people have complained about the fact that interpolate_pos_encoding cannot be passed when using the Trainer to train models on higher-resolution images

OK, that's good to know. If many models have this issue, it's a good reason to spend some time figuring out a solution! The most important thing is that it works with a standard forward/backward pass - if that's working, we should be able to find a way to integrate it if it's a wanted feature.

I also am not that happy I named it interpolate_pos_encoding, should have been called interpolate_position_embeddings.

Agreed, interpolate_position_embeddings would have been better originally. Now that interpolate_pos_encoding is implemented across the library, I'd say it's better to stick with it for consistency.

NielsRogge commented 5 months ago

Yes, the problem is that the Trainer does not allow passing arbitrary keyword arguments to a model's forward.

However, there's a workaround: https://discuss.huggingface.co/t/fine-tuning-vit-with-more-patches-higher-resolution/18731/4?u=nielsr
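The workaround boils down to making the flag the default rather than something the Trainer has to pass through. A minimal sketch of that idea, using a toy module in place of a real vision model (the class names here are illustrative, not transformers API):

```python
import torch

class AlwaysInterpolate(torch.nn.Module):
    """Wrapper that forces interpolate_pos_encoding=True on every forward,
    since the Trainer only forwards dataset columns, not extra kwargs."""

    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.model = model

    def forward(self, *args, **kwargs):
        kwargs.setdefault("interpolate_pos_encoding", True)
        return self.model(*args, **kwargs)

# Toy stand-in for a vision model whose forward accepts the kwarg
class ToyVisionModel(torch.nn.Module):
    def forward(self, pixel_values, interpolate_pos_encoding=False):
        return {"interpolated": interpolate_pos_encoding, "out": pixel_values * 2}

wrapped = AlwaysInterpolate(ToyVisionModel())
res = wrapped(torch.ones(1))
print(res["interpolated"])  # True
```

The same effect can be had by subclassing the concrete model and overriding forward, which is essentially what the linked forum workaround does.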

the-neural-networker commented 5 months ago

I can work on deit!

jla524 commented 5 months ago

I'd like to work on vivit

faiez22222 commented 5 months ago

I can take Clip and Blip2.

Hi @ashvinnihalani, I am new to open source. Could you help me a little to get started with this task?

davidgxue commented 5 months ago

I can work on chinese_clip. Will keep the team posted in the next few days. If I get more free time and there are remaining ones by then, happy to help out on additional tasks.

g1y5x3 commented 5 months ago

Working on detr, a bit tricky. Will explain in the PR.

davidgxue commented 5 months ago

Actually, I can also take bridgetower as well. They will come in as separate PRs though. Shouldn't be more complicated than chinese_clip. So recap: I will work on both bridgetower and chinese_clip.

nileshkokane01 commented 5 months ago

@amyeroberts ,

How do you manage this with `make fix-copies`, since most of the models are copied from CLIP, and we end up changing models that others have claimed? I changed GIT, but it is copied from CLIP, which in turn triggers cascading changes.

Or should we avoid `make fix-copies` altogether before sending a PR?

the-neural-networker commented 5 months ago

I will work on Swin, since DeiT is already implemented.

yMayanand commented 5 months ago

I will work on owlvit.

amyeroberts commented 4 months ago

@nileshkokane01 This is a good point - I'll update the list of models to indicate which models are "grouped" together. In the case of e.g. the CLIP family, there should be just one PR opened, adding the feature to CLIP and to the models which are copied from it. The steps would be:

davidgxue commented 4 months ago

@nileshkokane01 @amyeroberts In that case, I will refrain from working on chinese_clip and bridgetower since both have # Copied from transformers.models.clip.modeling_clip.CLIPVisionEmbeddings with CLIP in the comments. I think Kosmos 2 may also be copied from CLIP. Most likely a fair amount on the list will be inheriting from CLIP (just as a heads up to other folks)

Update: oh nice thank you Amy for updating the description to group them

davidgxue commented 4 months ago

I can take siglip. I think some functions are still copied from CLIP but just skimming it, doesn't seem like they will be related to interpolate position encoding code

zafstojano commented 4 months ago

@amyeroberts Doesn't idefics2 already handle this?

https://github.com/huggingface/transformers/blob/cf7bed98325a0be9d195cb6b66c6a0bef9fccbc8/src/transformers/models/idefics2/modeling_idefics2.py#L139-L149

For example, the following sample script:

import torch
import requests
from PIL import Image
from transformers import Idefics2Processor, Idefics2ForConditionalGeneration

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

url = "https://upload.wikimedia.org/wikipedia/commons/c/cc/ESC_large_ISS022_ISS022-E-11387-edit_01.JPG"
images = [Image.open(requests.get(url, stream=True).raw)]
messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "What's on the image?"},
        {"type": "image"},
    ],
}]

processor = Idefics2Processor.from_pretrained("HuggingFaceM4/idefics2-8b", do_image_splitting=False)
# Instead of the default 980, allow the largest edge to be 1500
processor.image_processor.size["longest_edge"] = 1500 

model = Idefics2ForConditionalGeneration.from_pretrained("HuggingFaceM4/idefics2-8b").to(device)

text = processor.apply_chat_template(messages)
inputs = processor(text=text, images=images, return_tensors="pt", padding=True)
for k, v in inputs.items():
    inputs[k] = v.to(device)

print("Input image shape:", inputs["pixel_values"].shape)

with torch.no_grad():
    out = model(**inputs)

print("Finished!")

Executes without errors and prints the following:

Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 7/7 [00:03<00:00,  2.26it/s]
Input image shape: torch.Size([1, 1, 3, 994, 1500])
Finished!

bhuvanmdev commented 4 months ago

Since all CLIP-like models can just borrow the changes made to the CLIP model, I will take tvp instead of altclip.

amyeroberts commented 4 months ago

@zafstojano Indeed! That's what I get for doing a quick grep and not double checking. Thanks for showing an example to verify. I'll take it off the list

davidgxue commented 4 months ago

Completed the PR #30719. I actually realized: Should I be referencing this issue directly in my PR? Because if any of our PRs merge then it may end up closing this issue. Should we make a child issue stemming from this instead?

amyeroberts commented 4 months ago

@davidgxue Good question! If instead of having 'Fixes #xxx' the PR says something else 'Addresses #xxx' or just mentions this issue then it will be linked and the issue won't be closed upon merge. It's not a big deal if it's closed accidentally, other than additional notifications as I can just re-open it.

zafstojano commented 4 months ago

Opened a PR (#30722) addressing this issue for the BLIP family of models (BLIP, BLIP2, InstructBLIP).

kyrajeep commented 4 months ago

@amyeroberts I would like to work on DETR. Is anyone working on it?

g1y5x3 commented 4 months ago

@amyeroberts I would like to work on DETR. Is anyone working on it?

I'm almost done. Was busy with work in the past 2 weeks.

M-Ali-ML commented 4 months ago

I'll be working on grounding_dino and hopefully I will have a PR soon.

amyeroberts commented 4 months ago

@MightyStud Thanks for picking a model and working to add this feature! After reviewing #30921, I realised that this isn't something we can add for models with backbones, which includes grounding DINO and DETR related models. I've updated the list to reflect this.

M-Ali-ML commented 4 months ago

@amyeroberts Aha, thanks for letting me know, I'd like to work on swin2sr then since I already allocated time this week.

OmarManzoor commented 4 months ago

Hi @amyeroberts Can I try out beit or data2vec?

amyeroberts commented 4 months ago

@OmarManzoor Certainly!

kishore-s-15 commented 4 months ago

@amyeroberts Is there any model that I can work on in this task?

amyeroberts commented 4 months ago

@kishore-s-15 There is currently no open PR for deit

kishore-s-15 commented 4 months ago

Thanks, @amyeroberts, I would love to work on it. Could you assign it to me?

p-kris10 commented 4 months ago

@amyeroberts I have opened a PR (#31131) for deit.

jacksoncamp42 commented 4 months ago

@amyeroberts Are there any models I can work on for this task?

amyeroberts commented 4 months ago

@jacksoncamp42 All models currently have open PRs. If you're interested in adding features to vision models, another way to contribute would be enabling `device_map="auto"`: #29786

jacksoncamp42 commented 4 months ago

@amyeroberts Thanks for the suggestion. Unfortunately, I currently don't have access to a multi-GPU environment.

Is there another area or feature that I can contribute to without needing a multi-GPU setup?

amyeroberts commented 4 months ago

@jacksoncamp42 Anyone in the community is welcome to tackle any issue within the library. For people who are contributing for the first time, we suggest looking for issues with the Good first issue or Good second issue tags.

manuelsh commented 1 week ago

CLIP family models have been tackled (and merged) here: https://github.com/huggingface/transformers/pull/32600