FoundationVision / VAR

[GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ultra-simple, user-friendly yet state-of-the-art* codebase for autoregressive image generation!
MIT License

Question about the details of the two-stage ablation #50

Closed: YilanWang closed this issue 1 month ago

YilanWang commented 1 month ago

Dear authors, how do the metrics compare with the original VQGAN under these two settings: 1) the multi-scale VQVAE (VQGAN) but paired with the transformer from VQGAN (taming-transformers) (CLIP?), or 2) the VQVAE (VQGAN) but with a GPT-like transformer?

I feel that 2) would be a bit hard to run experiments on, but I still want to ask whether the authors have tried anything like this. Thanks!

keyu-tian commented 1 month ago

Hi @YilanWang, the transformer used in taming-transformers is also a GPT-like transformer; the architecture is the same, and it is also basically the same as ViT/DiT.
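
To make the contrast in the question concrete, here is a minimal, runnable sketch (my own simplification, not the official VAR code) of the two autoregressive factorizations being compared: raster-order next-token prediction as in taming-transformers/VQGAN versus next-scale prediction as in VAR. The per-position `head`, the shared `codebook`, and the bicubic upsampling are illustrative stand-ins and assumptions; a real model uses a causal GPT-like transformer that attends over all earlier positions/scales.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

V, D = 4096, 32                # codebook size, embedding dim (illustrative values)
codebook = nn.Embedding(V, D)  # stand-in for the VQ codebook embedding
head = nn.Linear(D, V)         # stand-in for the GPT-like transformer backbone
                               # (a real model attends causally over all earlier positions)

@torch.no_grad()
def next_token_generate(sos, hw=16):
    """VQGAN / taming-transformers style: one token per step, raster order
    over a single hw x hw token map."""
    x, toks = sos, []                                # sos: (1, 1, D) start embedding
    for _ in range(hw * hw):
        t = head(x[:, -1]).argmax(-1)                # greedy next-token pick, shape (1,)
        toks.append(t)
        x = torch.cat([x, codebook(t)[:, None]], 1)  # feed the new token back in
    return torch.stack(toks, 1)                      # (1, hw*hw) flat token sequence

@torch.no_grad()
def next_scale_generate(sos, scales=(1, 2, 3, 4, 5, 6, 8, 10, 13, 16)):
    """VAR style: each step emits the whole s x s token map of the next scale,
    conditioned (in the real model) on all coarser scales generated so far."""
    x, maps = sos, []                                # sos: (1, 1, D) block for the 1x1 scale
    for i, s in enumerate(scales):
        t = head(x[:, -s * s:]).argmax(-1)           # (1, s*s) tokens, all in parallel
        maps.append(t.view(1, s, s))
        if i + 1 < len(scales):                      # build the next scale's input block:
            nxt = scales[i + 1]                      # embed tokens, upsample to the next size
            feat = codebook(t).transpose(1, 2).reshape(1, D, s, s)
            feat = F.interpolate(feat, size=(nxt, nxt), mode="bicubic")
            x = torch.cat([x, feat.flatten(2).transpose(1, 2)], 1)
    return maps                                      # coarse-to-fine list of token maps
```

Called as `next_scale_generate(torch.zeros(1, 1, D))`, the second loop needs only 10 forward passes to produce all token maps from 1x1 up to 16x16, versus 256 sequential steps for the raster-order loop; that difference in the prediction unit, not the transformer block itself, is what separates the two settings asked about above.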

YilanWang commented 1 month ago

Wow, thanks a lot for the reply! Would the authors consider doing a livestream or posting an explainer video on Bilibili sometime? From the other issues I can see that people are quite interested in the details.

YilanWang commented 1 month ago

Thanks to the authors; I'm closing this issue.