Open leonardodora opened 5 months ago
Yes, we use it directly. It adapts very quickly and the transformation is visible in about 500 steps. This is consistent with PixArt-Sigma.
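For readers wondering how weights can be reused "directly" when the VAE (and thus the latent channel count) changes: one common approach is to load the checkpoint non-strictly, copying every parameter whose name and shape still match and reinitializing the rest (typically just the input/output projections), then fine-tuning briefly. The sketch below illustrates that filtering step with plain dicts standing in for a state dict; the layer names and shapes are hypothetical, not Latte's actual architecture, and the thread does not confirm this exact mechanism.

```python
def transfer_matching_weights(pretrained, new_model):
    """Partition new_model's parameters into those that can be copied
    from the pretrained checkpoint (name and shape agree) and those
    that must stay at fresh initialization.

    Both arguments map parameter names to shape tuples, standing in
    for real weight tensors.
    """
    transferred, skipped = [], []
    for name, shape in new_model.items():
        if pretrained.get(name) == shape:
            transferred.append(name)          # safe to copy directly
        else:
            skipped.append(name)              # reinitialize, then fine-tune
    return transferred, skipped


# Hypothetical example: the patch-embedding projection changes because the
# new VAE produces latents with a different channel count (4 -> 8), while
# the transformer blocks keep their shapes and transfer untouched.
pretrained = {"patch_embed.proj": (1152, 4, 2, 2),
              "blocks.0.attn.qkv": (3456, 1152)}
new_model = {"patch_embed.proj": (1152, 8, 2, 2),
             "blocks.0.attn.qkv": (3456, 1152)}

transferred, skipped = transfer_matching_weights(pretrained, new_model)
print(transferred)  # ['blocks.0.attn.qkv']
print(skipped)      # ['patch_embed.proj']
```

With most parameters carried over, only the few reinitialized layers need to adapt, which is consistent with the transformation becoming visible after only ~500 steps.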
Thanks for your reply! But if we want to scale up the parameter count, what would you suggest we do first? Just train a larger Latte model from scratch?
I think a PixArt-alpha-style base model would need to be retrained.
Since the VAE of Open-Sora is different from Latte's, can the Latte weights be used directly? Or did your team train a Latte model from scratch?