I'm not well versed with GIT as a model.
Pipelines are usually agnostic to actual models. As long as model X is an AutoModelForVision2Seq, it should work out of the box.
If the architecture is different, we can discuss what's done and how to implement it. The rule of thumb: features follow use cases, not model internals (e.g. timestamps in audio recognition are useful for automated subtitling; it doesn't matter how models actually implement them).
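For concreteness, a minimal sketch of that "out of the box" case (the checkpoint name and image path are just examples, not from this thread):

```python
from transformers import pipeline

# Any checkpoint whose architecture is registered under AutoModelForVision2Seq
# (e.g. a VisionEncoderDecoderModel checkpoint) can be used directly.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
print(captioner("sample.jpg"))  # -> [{'generated_text': '...'}]
```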
Fair enough!
I guess the main reason the pipeline is acting weird could be that the model is loaded into AutoModelForCausalLM. Looking at the source code, it's not implemented for AutoModelForVision2Seq, I think.
Yes, that's exactly it. In the absence of tags, the Hub will check the config and assign a pipeline based on the ForXX architecture format, just like the pipeline does.
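As an illustration of that config-based fallback, a quick sketch (the commented output is what one would expect for the base GIT checkpoint):

```python
from transformers import AutoConfig

# The Hub matches the architecture name stored in the checkpoint's config
# against the ForXX pattern to pick a default widget/pipeline.
config = AutoConfig.from_pretrained("microsoft/git-base")
print(config.architectures)  # expected: ['GitForCausalLM'] -> text-generation by default
```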
Do you have a sample script to make it work for captioning?
If you check the Colab Notebook I linked above, you will see it at the end (the inference section).
The pipeline currently only supports classes that are instances of VisionEncoderDecoderModel. GitForCausalLM isn't supported, as that would require extending the image-to-text pipeline.
Seems to me that the colab does pretty much what the pipeline does:
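(The snippet referenced here isn't preserved in this thread; a rough sketch of the explicit GIT inference, with a placeholder checkpoint and image path, would look like this:)

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base")

# Encode the image and let the model generate a caption from it.
pixel_values = processor(images=Image.open("sample.jpg"), return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```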
Any reason not to implement ForVision2Seq?
Yeah, that is my understanding as well. Maybe @NielsRogge can provide more context on this.
Any reason not to implement ForVision2Seq?
The image-to-text pipeline currently only supports the MODEL_FOR_VISION_2_SEQ_MAPPING as seen here (hence, the AutoModelForVision2Seq class), however GIT is a special model that is part of the MODEL_FOR_CAUSAL_LM_MAPPING. The reason it's only defined in this mapping is because GitForCausalLM can technically also be used as a text-only generative model like GPT-2.
But in practice, GitForCausalLM is only used for image (and video)-to-text use cases. It is a custom model but has the same API as the AutoModelForVision2Seq class (it takes pixel_values as input to the generate method). So it wouldn't make sense to add the MODEL_FOR_CAUSAL_LM_MAPPING to this pipeline; rather, it probably makes sense to have GitForCausalLM as a standalone class in this pipeline, but I'm not sure this is feasible/allowed.
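To make the mapping difference concrete, a sketch of what this meant at the time of this discussion (behaviour may differ in later versions):

```python
from transformers import AutoModelForCausalLM, AutoModelForVision2Seq

# GitForCausalLM is registered in MODEL_FOR_CAUSAL_LM_MAPPING, so this works:
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base")

# ...whereas this raised a ValueError back then, because GitConfig was not in
# MODEL_FOR_VISION_2_SEQ_MAPPING, the mapping the image-to-text pipeline relies on.
model = AutoModelForVision2Seq.from_pretrained("microsoft/git-base")
```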
It is a custom model but has the same API as the AutoModelForVision2Seq class
So make it ForVision2Seq, no? As long as it upholds the invariant (signature + return type), then by definition it's OK to add it...
I've added BLIP and BLIP-2 to the ForVision2Seq mapping, making them usable with the image-to-text pipeline: https://github.com/huggingface/transformers/pull/21802.
However, GIT can't be added out of the box, due to GitForCausalLM not having pixel_values as the first argument in its forward call, but rather input_ids (unlike models like VisionEncoderDecoderModel, BlipForConditionalGeneration, etc.).
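The signature mismatch can be checked directly; a small sketch (the commented outputs are what one would expect for these classes, not verified here):

```python
import inspect
from transformers import BlipForConditionalGeneration, GitForCausalLM

# Compare the leading parameters of the two forward signatures.
print(list(inspect.signature(BlipForConditionalGeneration.forward).parameters)[:3])
# expected: ['self', 'pixel_values', 'input_ids']
print(list(inspect.signature(GitForCausalLM.forward).parameters)[:3])
# expected: ['self', 'input_ids', 'attention_mask']
```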
What do those input_ids correspond to?
If they are LongTensors like regular input ids, where did the image go? Does it need a combination or not?
GIT is a bit special in the sense that it can be viewed as a GPT-2 model, taking input_ids as input (the text prompt), and one can optionally provide pixel_values to condition the model on an image.
If you only provide pixel_values, then it will caption the image.
If you only provide input_ids, then it will behave like GPT-2.

Shouldn't we implement GitForVision2Seq in the first case? It's a classic encoder-decoder case, correct?
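For reference, the usage modes described above look roughly like this (a sketch using the base GIT checkpoint and a placeholder image):

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base")

# pixel_values only -> image captioning
pixel_values = processor(images=Image.open("sample.jpg"), return_tensors="pt").pixel_values
caption_ids = model.generate(pixel_values=pixel_values, max_length=50)

# input_ids only -> plain left-to-right text generation, GPT-2 style
prompt_ids = processor(text="a photo of", return_tensors="pt").input_ids
text_ids = model.generate(input_ids=prompt_ids, max_length=50)

# both -> caption conditioned on the text prompt
prompted_ids = model.generate(pixel_values=pixel_values, input_ids=prompt_ids, max_length=50)
```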
No, GitForCausalLM should ideally be added directly to the Vision2Seq mapping. It's a decoder-only Transformer. The only reason we can't add it directly is that it doesn't take pixel_values as its first keyword argument, which is what the image-to-text pipeline expects.
It seems input_ids is not necessary: https://colab.research.google.com/drive/1sLiSY2ixv52yqlw9EUW6BPwCq9XPLc9R#scrollTo=b3XKBvEcU_PR&line=2&uniqifier=1
No?
If it's a regular decoder for the text, then the decoder_input_ids should automatically be set by generate, making GitForVision2Seq possible. No?
Yes, correct: input_ids is an optional argument, which can serve as a text prompt. If you don't provide one, the model will start generating text from the BOS token with id = 1.
But the problem is here. The inputs, prepared using the image processor, will be pixel_values, but GitForCausalLM has input_ids as its first keyword argument.
Oh, this code is already not looking pretty; there could be a way to make it better.
But we could always add

class GitForVision2Seq(GitForCausalLM):
    def forward(self, pixel_values=None, **kwargs):
        # expose pixel_values as an explicit keyword argument and delegate
        # everything else to GitForCausalLM's forward
        return super().forward(pixel_values=pixel_values, **kwargs)

for instance?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I have removed the Good first issue label as there is no clear plan explaining to a beginner what to do to solve this issue. Please add such a plan if you want to re-add that label.
If @Narsil agrees, we can add the GitForVision2Seq class so that GIT is supported by the pipeline.
To fix this issue, one can add a GitForVision2Seq class, as explained above, to transformers/models/git/modeling_git.py.
This was fixed by #23362
I have fine-tuned the microsoft/git-base model on image captioning (Colab Notebook).
I am trying to use the model with 🤗 Pipelines:
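(The original snippet isn't preserved here; presumably it was something along these lines, with a placeholder standing in for the fine-tuned checkpoint id:)

```python
from transformers import pipeline

# "<finetuned-git-checkpoint>" is a placeholder, not the actual repo id.
captioner = pipeline("image-to-text", model="<finetuned-git-checkpoint>")
print(captioner("sample.jpg"))
```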
It only spits out:
If you check the Colab Notebook, you will notice that it works okay when the inference is performed explicitly, i.e., without pipelines. Is it because the architecture tagged with the model is GitForCausalLM?
Also note that on the model repo there is a tag "Image To Text", which I have manually added to see if that has any effect. By default, the model gets tagged as a text generation model.
@Narsil, is it out of scope to support this model in an image-to-text generation pipeline?