liyunsheng13 / BDL

MIT License
222 stars 30 forks

BDL adversarial training #29

Closed Turlan closed 4 years ago

Turlan commented 4 years ago

First of all, it's just a silly question about my understanding of "real" and "fake". I found that in your training code, the adversarial loss on the output probability appears to be the opposite of the common setting, which would be 1 for "real" data and 0 for "fake" data. Since we have the ground truth for the translated synthetic data, we need to push the unlabelled/pseudo-labelled real data to behave like the labelled data; that means the translated synthetic data should be "real" and the target data should be "fake".

Common setting of adversarial training: when we train the generator (the segmentation network), we push the discriminator's output on the generator's input toward "real" (1). When we train the discriminator, inputs produced by the generator should be classified as "fake" (0).

Your setting: you reverse the domain labels for the real data and the generator (segmentation) output. Although it may have little influence on the final result, I still want to check whether this is a personal preference or a deliberate design.
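For what it's worth, a minimal sketch (not the repository's actual code; the `bce` helper and the probability `0.3` are illustrative) of why swapping the 0/1 convention is harmless: the binary cross-entropy is symmetric, so training toward label 1 on probability p is identical to training toward label 0 on probability 1 - p, as long as the generator and discriminator use labels consistently.

```python
import math

def bce(p, label):
    # Binary cross-entropy for a single prediction p in (0, 1)
    # against a hard domain label in {0, 1}.
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# Common convention: generator pushes the discriminator output toward "real" = 1.
p = 0.3  # hypothetical discriminator output for a generator sample
g_loss_common = bce(p, 1)

# Swapped convention: "real" = 0. Because bce(p, 0) == bce(1 - p, 1),
# the loss (and its gradients w.r.t. the logits) is unchanged.
g_loss_swapped = bce(1 - p, 0)

print(abs(g_loss_common - g_loss_swapped) < 1e-12)
```

So as long as the same reversed labels are used for both the discriminator update and the generator (segmentation) update, the optimization problem is the same.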

liyunsheng13 commented 4 years ago

I don't think the domain label makes any difference. I just randomly use 0 or 1 for real or fake.