Alpha-VLLM / Lumina-mGPT

Official Implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraining"
https://arxiv.org/abs/2408.02657

Training the model while keeping the chameleon tokenizer #26

Closed KaiyueSun98 closed 2 months ago

KaiyueSun98 commented 2 months ago

Hi, thanks for your great work! It seems that the training script lumina_mgpt/finetune_solver.py initializes the image tokenizer with random normal weights. How can I initialize the image tokenizer with the Chameleon tokenizer, as is done in the inference script? Thanks!
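
To clarify what I mean, here is a minimal sketch of the kind of initialization I am after: loading the released Chameleon VQ tokenizer checkpoint into the training model instead of keeping the random init. The checkpoint path and the model.vqmodel attribute below are placeholders I made up, not the repo's actual API:

```python
import torch

# Hypothetical path, mirroring the checkpoint layout the inference solver expects.
VQ_CKPT = "ckpts/chameleon/tokenizer/vqgan.ckpt"

def init_image_tokenizer_from_chameleon(model, ckpt_path=VQ_CKPT):
    """Overwrite the randomly initialized image-tokenizer weights with pretrained ones."""
    state = torch.load(ckpt_path, map_location="cpu")
    # Some checkpoints nest the weights under a "state_dict" key.
    state = state.get("state_dict", state)
    # `model.vqmodel` is a placeholder for wherever the image tokenizer module lives.
    missing, unexpected = model.vqmodel.load_state_dict(state, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
    return model
```

Is something along these lines the intended way to do it within finetune_solver.py, or is there an existing option I missed?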