Aleph-Alpha / magma

MAGMA - a GPT-style multimodal model that can understand any combination of images and language. NOTE: The freely available model from this repo is only a demo. For the latest multimodal and multilingual models from Aleph Alpha check out our website https://app.aleph-alpha.com

`build_labels` includes masked image tokens? #46

Open RealNicolasBourbaki opened 1 year ago

RealNicolasBourbaki commented 1 year ago

Hi Authors,

In these lines, the function `build_labels` masks all the labels in the positions up to the sequence length of the image embeddings. What difference would it make if one just used the caption?

To be more specific, the code currently builds a label sequence whose first part (with the same sequence length as the image embedding) is set entirely to -100, and whose second part contains the actual text labels. Why do we need all the -100s? Why couldn't we just use the text label ids?
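Roughly, this is what I understand the code to do (a hypothetical sketch for illustration; `build_labels_sketch` is my own name, not the repo's function):

```python
import torch

def build_labels_sketch(image_seq_len: int, caption_ids: torch.Tensor) -> torch.Tensor:
    # the first `image_seq_len` positions correspond to image embeddings -> set to -100
    image_part = torch.full((image_seq_len,), -100, dtype=torch.long)
    # the remaining positions are the actual caption token ids
    return torch.cat([image_part, caption_ids])

# e.g. with 4 image positions and a 3-token caption:
labels = build_labels_sketch(4, torch.tensor([101, 202, 303]))
# -> tensor([-100, -100, -100, -100,  101,  202,  303])
```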

Thanks a lot!

CoEich commented 11 months ago

Hi,

positions with index -100 get ignored by default when using torch cross entropy loss. We don't want a loss on these positions because the model is supposed to predict the text.
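For example (a minimal sketch, not our actual code, just showing the default `ignore_index` behaviour of `torch.nn.functional.cross_entropy`):

```python
import torch
import torch.nn.functional as F

vocab_size = 10
logits = torch.randn(6, vocab_size)                 # 6 sequence positions
labels = torch.tensor([-100, -100, -100, 4, 7, 2])  # image positions masked with -100

# F.cross_entropy ignores targets equal to ignore_index (default: -100),
# so only the three text positions contribute to the loss.
loss = F.cross_entropy(logits, labels)
```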

Best,

Constantin

RealNicolasBourbaki commented 11 months ago

> Hi,
>
> positions with index -100 get ignored by default when using torch cross entropy loss. We don't want a loss on these positions because the model is supposed to predict the text.
>
> Best,
>
> Constantin

Yeah I get it. But why mask at all? Why not just cut that part out and only use the text that you are predicting?

CoEich commented 10 months ago

I guess you are right, one could also truncate the logits and labels from the left by the number of image positions.

In any case, we need some logic to mask the right-padding: since the amount of right-padding varies per batch item, we cannot simply cut this part off without breaking up the batch, which would be awkward. Using the same function to mask the image positions seems like the cleanest solution to me.
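Something like this (a hypothetical sketch, not the repo's implementation; `mask_labels`, the pad handling, and the token values are made up for illustration):

```python
import torch

def mask_labels(input_ids: torch.Tensor, image_seq_len: int, pad_token_id: int) -> torch.Tensor:
    labels = input_ids.clone()
    labels[:, :image_seq_len] = -100          # image positions: no text loss here
    labels[labels == pad_token_id] = -100     # right-padding: varies per batch item
    return labels

# batch of 2, 3 image positions, pad_token_id = 0
batch = torch.tensor([[9, 9, 9, 5, 6, 7, 0, 0],
                      [9, 9, 9, 8, 0, 0, 0, 0]])
print(mask_labels(batch, image_seq_len=3, pad_token_id=0))
# tensor([[-100, -100, -100,    5,    6,    7, -100, -100],
#         [-100, -100, -100,    8, -100, -100, -100, -100]])
```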

In the end it is a matter of taste :-)