Could you please release the code or pretrained directions for image editing based on SemanticStyleGAN? I found this in the paper: "For both generators, we randomly synthesize 50,000 images for labeling." We want to try attribute editing on SemanticStyleGAN, but the cost of annotation is too high. Could you please provide the annotated data?
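For context, below is a minimal sketch of the InterFaceGAN-style direction search we would like to try once labeled samples are available. It assumes the 50,000 synthesized latent codes and their binary attribute annotations are stored in `latents.npy` and `labels.npy` (both placeholder file names, not from the paper or this repo), and fits a linear SVM whose normal vector serves as the editing direction.

```python
# Sketch of linear attribute-direction discovery in latent space.
# `latents.npy` / `labels.npy` are hypothetical files holding the W-space codes
# of the synthesized images and their binary attribute labels.
import numpy as np
from sklearn.svm import LinearSVC

latents = np.load("latents.npy")   # shape: (50000, latent_dim)
labels = np.load("labels.npy")     # shape: (50000,), 0/1 attribute labels

# Fit a linear boundary in latent space; its unit normal is the editing direction.
svm = LinearSVC(C=1.0, max_iter=10000)
svm.fit(latents, labels)
direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])

# Edit one sample by moving its latent code along the direction
# (3.0 is an arbitrary editing strength chosen for illustration).
w_edited = latents[0] + 3.0 * direction
```

If you could share either the annotated data or the trained directions themselves, we could skip the labeling step entirely.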