Thank you for your great work!
In the stage 1 training mentioned in the paper, is the input to the LLM both images and text? I ask because the description in the article, 'After the pretraining stage, the model is capable of generating images for single text descriptions', sounds unimodal.
Also, what does 'The Language Model (LLM) is then tasked with only generating these placeholders for text creation' mean? Does the stage 1 LLM only need to generate the placeholders, without any other token embeddings?