Open PaulFidika opened 1 year ago
Yeah, a trained model would be great so that this tech can actually be used.
I totally forgot about this model; has anyone ended up training a version of this? Looks like this codebase hasn't been updated in 5 months.
Has anyone trained a sample model of this? I realize full-scale training on LAION will take quite a lot of resources (SD v2 trained for 200k GPU hours), but I'm wondering (1) are there any publicly available sample trained models yet, and (2) is there any estimate of the resources required for a full-scale training run? I'm guessing GigaGAN will require less training compute than Stable Diffusion did.