salesforce / BLIP

PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
BSD 3-Clause "New" or "Revised" License

What does the model(**inputs, output_attentions=True) output? #219

Open ZhanliangAaronWang opened 1 month ago

ZhanliangAaronWang commented 1 month ago

For `model(**inputs, output_attentions=True)`, the output attention tensor has shape (12, batch_size, 12, 577, 577). It looks like the self-attention over image patches, but what does the first "12" represent here? The BLIP Spaces on Hugging Face say it is the attention matrix for all layers.
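
For reference, here is a minimal sketch of how that 5-D shape can arise, assuming the Hugging Face `transformers` `BlipVisionModel` with the `Salesforce/blip-image-captioning-base` checkpoint (the checkpoint name and dummy image are illustrative, not taken from the issue): `outputs.attentions` is a tuple containing one `(batch_size, num_heads, seq_len, seq_len)` tensor per transformer layer, so stacking the tuple produces a leading dimension equal to the number of layers.

```python
import torch
from PIL import Image
from transformers import BlipProcessor, BlipVisionModel

# Checkpoint name is illustrative; any BLIP ViT-B checkpoint should behave the same way.
name = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(name)
model = BlipVisionModel.from_pretrained(name)

image = Image.new("RGB", (384, 384))  # dummy image, just for shape inspection
inputs = processor(images=image, return_tensors="pt")

outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one tensor per transformer layer.
print(len(outputs.attentions))      # 12 -> number of ViT-B layers
print(outputs.attentions[0].shape)  # torch.Size([1, 12, 577, 577])
#                                     12 heads, 577 = 576 patches (24x24 at 384px/16) + 1 [CLS]

# Stacking the per-layer tuple reproduces the reported 5-D shape:
stacked = torch.stack(outputs.attentions)
print(stacked.shape)                # torch.Size([12, 1, 12, 577, 577])
```

Under this reading, the first "12" would be the layer index (ViT-B has 12 transformer blocks), the second "12" the attention heads, and 577 the patch sequence length including the [CLS] token.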