mit-han-lab / data-efficient-gans

[NeurIPS 2020] Differentiable Augmentation for Data-Efficient GAN Training
https://arxiv.org/abs/2006.10738
BSD 2-Clause "Simplified" License

the D and G values for updating #92

Closed: Mshz2 closed this issue 2 years ago

Mshz2 commented 2 years ago

Hi, would you please help me find where exactly in your code the values D(T(x)) and D(T(G(z))) from (i) and (ii), and D(T(G(z))) from (iii), are computed?

[Image: the paper's update rules (i)–(iii), showing the D(T(x)) and D(T(G(z))) terms in the discriminator and generator objectives]

zsyzzsoft commented 2 years ago

In the code you can see T(x) and T(G(z)) in e.g. loss.py. (ii) and (iii) are not explicitly separated in the code, as that separation happens at the optimizer level.
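Roughly, the structure looks like the following sketch (an illustrative PyTorch-style rewrite, not the exact repo code; `D`, `G`, `z`, and the `policy` string are assumed, while `DiffAugment` is the augmentation T from the paper):

```python
import torch.nn.functional as F
from DiffAugment_pytorch import DiffAugment  # T in the paper (repo helper)

policy = 'color,translation,cutout'

def d_loss(D, G, x_real, z):
    # (i) D(T(x)) and (ii) D(T(G(z))): both inputs are augmented before D sees them
    logits_real = D(DiffAugment(x_real, policy=policy))
    logits_fake = D(DiffAugment(G(z).detach(), policy=policy))
    return F.softplus(-logits_real).mean() + F.softplus(logits_fake).mean()

def g_loss(D, G, z):
    # (iii) D(T(G(z))): the same expression, but gradients flow back into G
    logits_fake = D(DiffAugment(G(z), policy=policy))
    return F.softplus(-logits_fake).mean()
```

The same D(T(G(z))) expression serves both (ii) and (iii); which network it updates is determined by which optimizer the resulting loss is passed to.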

Mshz2 commented 2 years ago

> In the code you can see T(x) and T(G(z)) in e.g. loss.py. (ii) and (iii) are not explicitly separated in the code, as that separation happens at the optimizer level.

Thank you very much for your reply! So Loss_G is the latest loss value from the discriminator's decision on G(z), taken just before it is used for optimizing the generator, right? In other words, if I want to play around with the generator's penalty, I only need to add values to Loss_G, right?

Many Thanks!

zsyzzsoft commented 2 years ago

Yes, you can understand it like that.
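In a plain PyTorch training loop the idea would look something like this (a sketch only; `D`, `G`, `z`, `DiffAugment`, `policy`, `my_penalty`, and `opt_G` are placeholder names, not the repo's variables):

```python
import torch.nn.functional as F

# Generator step with an extra term added to Loss_G (illustrative names only):
logits_fake = D(DiffAugment(G(z), policy=policy))   # D(T(G(z)))
loss_G = F.softplus(-logits_fake).mean()            # the usual Loss_G
loss_G = loss_G + my_penalty                        # your additional penalty term

opt_G.zero_grad()
loss_G.backward()
opt_G.step()
```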

Mshz2 commented 2 years ago

> Yes, you can understand it like that.

Thanks again for your reply. One last question: I was trying to find how I can get the Loss_G value in the training loop for every training step that leads to a network update, like this in regular PyTorch: `loss.backward(); optimizer.step()`.

However, I really tried but got lost in the code. Would you please tell me where I can find this Loss_G value at each training step, at the place where the training loop for each batch happens? Maybe just the one line where it is obtained and backpropagated.

I really appreciate your efforts!

zsyzzsoft commented 2 years ago

This is not easy in TF1. You can change this line to e.g. `_, G_loss_val = tflib.run([G_train_op, G_loss], feed_dict)`. Alternatively, you can see the G_loss values using TensorBoard with the generated summary file in your experiment dir.
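For context, the surrounding change would look roughly like this (the snippet is illustrative; `G_train_op`, `G_loss`, `feed_dict`, and `tflib` are names from the TF1 StyleGAN2 codebase, where `tflib` is `dnnlib.tflib`):

```python
# Inside the per-iteration training loop, replace the repo's existing
# G-update call with something like (illustrative, not the exact repo line):
_, G_loss_val = tflib.run([G_train_op, G_loss], feed_dict)

# G_loss_val is fetched from the same graph execution that computes the gradients,
# so it should reflect the loss before this step's weight update takes effect.
print('G_loss at this step:', G_loss_val)
```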

Mshz2 commented 2 years ago

> This is not easy in TF1. You can change this line to e.g. `_, G_loss_val = tflib.run([G_train_op, G_loss], feed_dict)`. Alternatively, you can see the G_loss values using TensorBoard with the generated summary file in your experiment dir.

Thanks. I think the `G_loss_val` at this line is obtained after G has already been updated by the optimizer, right?

If I want to get this G_loss value in the loop before `G_train_op` is run, so I can modify it, on which line could I find it? Is it maybe this line?

I am afraid the summary file is only available once things are done, so I cannot change the value of G_loss for the next iteration of the G network update.

I highly appreciate your replies!

zsyzzsoft commented 2 years ago

There is no general way to do this in TF1. You have to use TF operators or wrap your custom function using TF's numpy function (e.g. `tf.numpy_function`). You may wish to learn more about TF graph mode from the TF documentation.
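As a rough sketch of the "wrap a custom function" route (illustrative only; `G_loss` and `fake_images` stand for the corresponding graph tensors, and the penalty itself is made up):

```python
import numpy as np
import tensorflow as tf  # TF1 graph mode

def my_penalty_np(fake_images_np):
    # Arbitrary custom penalty computed in numpy on the host.
    return np.float32(0.1 * np.abs(fake_images_np).mean())

# At graph-construction time, before the G optimizer is built:
penalty = tf.numpy_function(my_penalty_np, [fake_images], tf.float32)  # tf.py_func in older TF1
penalty.set_shape([])            # numpy_function drops static shape info
G_loss = G_loss + penalty        # the optimizer then minimizes the modified loss
```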

Mshz2 commented 2 years ago

Then, can I instead find the mentioned loss within diffaugment-stylegan2-pytorch in your repo? I guess so, since everything, even the StyleGAN2 backbone, is written in PyTorch. For example, maybe in this line of loss.py in the PyTorch version?

Can I do this?

```python
loss_Gmain = torch.nn.functional.softplus(-gen_logits)  # -log(sigmoid(gen_logits))
loss_Gmain = X + loss_Gmain  # X is a variable (the extra penalty)
training_stats.report('Loss/G/loss', loss_Gmain)
```

zsyzzsoft commented 2 years ago

Yes, you can try it.
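Roughly, the relevant part of the PyTorch loss.py then looks like this (paraphrased, not an exact copy; `X` is a placeholder for whatever penalty you add, and `gain` comes from the surrounding training code):

```python
# Gmain branch of the loss (paraphrased): gen_logits = D(T(G(z)))
loss_Gmain = torch.nn.functional.softplus(-gen_logits)  # -log(sigmoid(gen_logits)), term (iii)
loss_Gmain = X + loss_Gmain                              # X: your extra penalty (placeholder)
training_stats.report('Loss/G/loss', loss_Gmain)

# The modified loss is what gets backpropagated; the generator optimizer
# then steps in the training loop, so add X before this backward call:
loss_Gmain.mean().mul(gain).backward()
```

Since the optimizer step happens outside loss.py, adding the term here should be enough: whatever you add to `loss_Gmain` is minimized together with the GAN loss.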