UBCDingXin / improved_CcGAN

Continuous Conditional Generative Adversarial Networks (CcGAN)
https://arxiv.org/abs/2011.07466
MIT License

Dispense with one-hot-vectors #3

Closed jeiglsperger closed 3 years ago

jeiglsperger commented 3 years ago

Hi!

I still don't quite understand how it is possible to dispense with transforming y (the condition) into one-hot vectors, since the condition still passes through some layers as a scalar after being fed into D or G.

Kind regards!

UBCDingXin commented 3 years ago

@Zepp3 It is usually impossible to transform a scalar continuous condition y into a one-hot vector because there are infinitely many distinct values that y can take. Please check Section 2.2 of our paper at https://arxiv.org/pdf/2011.07466.pdf, where two label input mechanisms are provided. In the improved label input mechanism, an embedding network is learned to map a scalar continuous y into a high-dimensional space, and this high-dimensional representation of y is fed into D and G.
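A rough sketch of that idea, assuming a small MLP as the embedding network (the actual embedding network and injection points in this repo may differ, e.g. conditional normalization in G and label projection in D):

```python
import torch
import torch.nn as nn

class LabelEmbed(nn.Module):
    """Hypothetical minimal embedding network: maps a scalar regression
    label y to a high-dimensional vector h, replacing a one-hot vector."""
    def __init__(self, dim_embed=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, dim_embed), nn.ReLU(),
            nn.Linear(dim_embed, dim_embed), nn.ReLU(),
        )

    def forward(self, y):
        # y: (batch,) of continuous labels, e.g. normalized to [0, 1]
        return self.net(y.view(-1, 1))

embed = LabelEmbed()
y = torch.rand(8)      # 8 continuous labels
h = embed(y)           # (8, 128) high-dimensional representation fed into D and G
```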

jeiglsperger commented 3 years ago

Thanks! So I can assume that the high-dimensional representation does not damage the relationship among the labels. Do you think that, in general, CcGAN is applicable to synthesizing tabular data, not only to image generation?

UBCDingXin commented 3 years ago

@Zepp3

Please note that the high-dimensional representation is only used to input regression labels into D and G; the construction of a hard/soft vicinity for y is still conducted in the original scale of y.
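A minimal sketch of the vicinity weighting in the original label scale, assuming the hard/soft vicinity forms from Section 2 of the paper (the hyper-parameter values `kappa` and `nu` below are placeholders, not the ones used in the repo):

```python
import numpy as np

def hard_vicinity_weights(y_train, y, kappa=0.02):
    """Indicator weights: keep samples whose label is within kappa of y."""
    return (np.abs(y_train - y) <= kappa).astype(float)

def soft_vicinity_weights(y_train, y, nu=2500.0):
    """Gaussian-shaped weights that decay with the label distance to y."""
    return np.exp(-nu * (y_train - y) ** 2)

y_train = np.random.rand(1000)                   # normalized training labels
w_hard = hard_vicinity_weights(y_train, y=0.5)   # 0/1 weights
w_soft = soft_vicinity_weights(y_train, y=0.5)   # smooth weights in (0, 1]
```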

If tabular data does not have discrete variables, CcGAN may apply.

jeiglsperger commented 3 years ago

@UBCDingXin

Why wouldn't discrete variables in the data be suitable, as long as the labels are continuous?

UBCDingXin commented 3 years ago

@Zepp3

CcGAN assumes the conditional image distribution is continuous.

jeiglsperger commented 3 years ago

Thanks!