When calling model(**inputs, output_attentions=True), the returned attention tensors have shape (12, batch_size, 12, 577, 577). This looks like the self-attention over the image patches, but what does the first "12" represent? The BLIP Spaces on Hugging Face say this is the attention matrix for all layers.
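For context, here is a minimal shape check using NumPy stand-ins for the real tensors. It assumes (as the Hugging Face docs describe for output_attentions=True) that the model returns a tuple with one attention tensor per layer, each shaped (batch_size, num_heads, seq_len, seq_len); stacking that tuple reproduces the 5-D shape above, which would make the first "12" the layer dimension:

```python
import numpy as np

# Hypothetical sizes matching the shape in question:
# 12 layers, 12 heads, 577 tokens (576 patches + 1 [CLS]).
num_layers, batch_size, num_heads, seq_len = 12, 1, 12, 577

# Stand-in for the per-layer tuple that output_attentions=True returns.
attentions = tuple(
    np.zeros((batch_size, num_heads, seq_len, seq_len))
    for _ in range(num_layers)
)

# Stacking along a new leading axis yields (12, batch_size, 12, 577, 577),
# i.e. the first "12" indexes the layers.
stacked = np.stack(attentions)
print(stacked.shape)  # (12, 1, 12, 577, 577)
```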