I've read the paper, and in theory I should be able to more or less swap in a different generator, configure the latent-space dimensionality, and then have the matrix of candidate directions trained against the ResNet-18 reconstructor you used for ProgGAN/StyleGAN2.
Do you think it's reasonable to adapt the code base to the stylegan2-ada-pytorch model? And could you give me a hint about the code in the /models directory, in case I want to give it a try myself? I'm not entirely sure what those files are for. Happy to help rewrite code, though. :)
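For concreteness, here is a rough sketch of the kind of adapter I imagine writing, based on the public stylegan2-ada-pytorch call signature `G(z, c)`. The class name, the `dim_z` attribute, and the assumption that the direction-discovery code only needs a latent dimensionality plus a `forward(z)` call are my guesses about what /models expects, not something from this repo:

```python
import torch
import torch.nn as nn


class StyleGAN2ADAWrapper(nn.Module):
    """Hypothetical adapter exposing the interface I assume the
    direction-discovery code needs: a `dim_z` attribute and a
    forward(z) -> image call."""

    def __init__(self, g, dim_z=512):
        super().__init__()
        self.g = g          # a loaded stylegan2-ada-pytorch generator
        self.dim_z = dim_z  # its z-space dimensionality (G.z_dim)

    def forward(self, z):
        # stylegan2-ada-pytorch generators take (z, c); for an
        # unconditional model, pass an empty label tensor.
        c = torch.zeros(z.shape[0], 0, device=z.device)
        return self.g(z, c)
```

If something along these lines is roughly what the /models wrappers already do for ProgGAN/StyleGAN2, I'd be glad to write the equivalent for the ADA generator.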
EDIT: Sorry for duplication, didn't see the closed issue earlier.