j-min / VL-T5

PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021)
https://arxiv.org/abs/2102.02779
MIT License

Text Generation Maximum Length #32

Open aboggust opened 1 year ago

aboggust commented 1 year ago

Thank you so much for this repo! It has been a pleasure to work with.

I am setting up a chart captioning finetuning task. My dataset contains pairs of chart images and chart scenegraphs (textual representations of the chart spec). I also have ground truth natural language captions.

I have finetuned your pretrained VLT5 model on my data. It is generating informative captions, but the generated captions are much shorter than the ground truth captions. The ground truth captions are on average 450 characters, whereas the generated captions are on average 181 characters.

Would you expect VLT5 to prefer short captions (i.e., because it was pretrained on short text)? Or would you expect I have a parameter set incorrectly? I have set gen_max_length = 512 and max_text_length = 512.

j-min commented 1 year ago

Hi, thanks a lot for your interest!

In my experiments, most of the target text was pretty short (< 20 tokens), so I don't have experience using VL-T5 to generate such long text. In theory, the model should learn the length distribution of the target data, but LMs can often degenerate for various reasons (e.g., being trained on small data).

For your use case, how about controlling the min_length parameter in the generate() method?
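For context on what that parameter does: in HuggingFace-style generation (which VL-T5's decoding builds on), `min_length` works by masking the EOS token's logit to negative infinity until the output reaches the minimum length, so the decoder cannot stop early. Here is a minimal pure-Python sketch of that mechanism with a made-up vocabulary and scoring function (not the real model or the actual library internals):

```python
import math

EOS = 0  # hypothetical EOS token id in a toy vocabulary


def toy_logits(seq):
    # Fake "model": always prefers EOS, then token 1, then token 2.
    return {EOS: 2.0, 1: 1.0, 2: 0.5}


def greedy_generate(min_length=0, max_length=10):
    seq = []
    while len(seq) < max_length:
        scores = dict(toy_logits(seq))
        if len(seq) + 1 < min_length:
            # This is the min_length trick: forbid EOS until the
            # sequence (including the would-be EOS) is long enough.
            scores[EOS] = -math.inf
        token = max(scores, key=scores.get)
        seq.append(token)
        if token == EOS:
            break
    return seq


short = greedy_generate(min_length=0)  # stops at EOS immediately
long = greedy_generate(min_length=5)   # forced to emit 5 tokens
print(short, long)  # → [0] [1, 1, 1, 1, 0]
```

With the real model you would pass something like `model.generate(..., min_length=..., max_length=...)`; note that these counts are in tokens, not characters, so a 450-character caption target corresponds to far fewer than 450 tokens.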