Closed — usmancheema89 closed this issue 1 year ago
Think I figured out the issue. In `w_projector.py`, change:

```python
del G
return w_opt.repeat([1, 18, 1])
```

to

```python
G_map_num_ws = G.mapping.num_ws
del G
return w_opt.repeat([1, G_map_num_ws, 1])
```
Great! The correction you suggested is indeed the right way to do this: it makes the code agnostic to the generator's number of layers.
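To illustrate why the hard-coded `18` only works for 1024x1024 generators, here is a minimal sketch (not the PTI code itself; `num_ws_for_resolution` and `expand_w` are hypothetical stand-ins for `G.mapping.num_ws` and `w_opt.repeat`). It assumes the StyleGAN2 convention of two w vectors per synthesis block:

```python
import math

def num_ws_for_resolution(resolution):
    # StyleGAN2 convention: two w vectors per synthesis block, so a
    # 1024x1024 generator has 18 ws while a 256x256 generator has 14.
    return 2 * int(math.log2(resolution)) - 2

def expand_w(w_opt, num_ws):
    # Mimics w_opt.repeat([1, num_ws, 1]) on a [1, 1, 512] latent:
    # duplicate the single optimized w once per synthesis layer.
    return [[list(w) for _ in range(num_ws)] for w in w_opt]

w_opt = [[0.0] * 512]  # batch of one 512-dim latent
ws = expand_w(w_opt, num_ws_for_resolution(256))
print(len(ws[0]))  # prints 14, not the hard-coded 18
```

Repeating to a fixed 18 layers would feed a `[1, 18, 512]` tensor into a synthesis network expecting `[1, 14, 512]`, producing exactly the kind of size mismatch reported here; reading `G.mapping.num_ws` keeps the shapes consistent for any resolution.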
I am using a self-trained model, trained with the StyleGAN-ADA PyTorch repository. When using `use_multi_id_training=True`, I get a size-mismatch error in the forward call of `G.synthesis`.
The full trace is shown below: