The `make_mlp` function also appends batch normalization, dropout, and the activation function to the last layer. This may be problematic, since the output of this function is also used to build the classifier in the discriminator (`self.real_classifier = make_mlp(...)`). As a result, the final classifier produces an output that is batch normalized and squashed by a (leaky) ReLU. Is this the desired behaviour?
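One common fix, sketched below, is to add a flag so the final layer stays purely linear. This is a hypothetical variant, not the repository's actual code; the `make_mlp` signature here (dimension list plus `activation`, `batch_norm`, `dropout` options) is assumed from the usual pattern, and the `last_layer_linear` flag is my own addition:

```python
import torch.nn as nn

def make_mlp(dim_list, activation='leakyrelu', batch_norm=True,
             dropout=0.0, last_layer_linear=True):
    """Build an MLP; optionally keep the final layer purely linear
    so a downstream classifier head emits unconstrained logits."""
    layers = []
    num_layers = len(dim_list) - 1
    for i, (dim_in, dim_out) in enumerate(zip(dim_list[:-1], dim_list[1:])):
        layers.append(nn.Linear(dim_in, dim_out))
        # Skip BN / activation / dropout on the last layer: a batch-normed,
        # ReLU-squashed output is usually not what you want from a classifier.
        if last_layer_linear and i == num_layers - 1:
            break
        if batch_norm:
            layers.append(nn.BatchNorm1d(dim_out))
        if activation == 'relu':
            layers.append(nn.ReLU())
        elif activation == 'leakyrelu':
            layers.append(nn.LeakyReLU())
        if dropout > 0:
            layers.append(nn.Dropout(p=dropout))
    return nn.Sequential(*layers)
```

With `last_layer_linear=True` the discriminator's `real_classifier` would end in a plain `nn.Linear`, which pairs naturally with a logit-based loss such as `BCEWithLogitsLoss`.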