1 - Using an optimizer other than GradientDescentOptimizer raises a ValueError. This can be worked around by wrapping the optimizer creation in the following line (there may be other solutions):
with tf.variable_scope(tf.get_variable_scope(), reuse=False):
2 - The loss function takes named parameters, e.g.
sigmoid_cross_entropy_with_logits(logits=Dx, labels=tf.ones_like(Dx))
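For intuition about what the call in note 2 computes, here is a minimal NumPy sketch of sigmoid cross-entropy on logits, written in the numerically stable form that TensorFlow's documentation gives (max(x, 0) - x*z + log(1 + exp(-|x|))). This is an illustrative equivalent, not TensorFlow's actual implementation; Dx stands in for the discriminator's logits as in the note above.

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(logits, labels):
    # Numerically stable form of -z*log(sigmoid(x)) - (1-z)*log(1-sigmoid(x)):
    #   max(x, 0) - x * z + log(1 + exp(-|x|))
    x = np.asarray(logits, dtype=float)
    z = np.asarray(labels, dtype=float)
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

# Discriminator loss on real samples: labels are all ones,
# mirroring labels=tf.ones_like(Dx) from the note.
Dx = np.array([2.0, -1.0, 0.5])  # hypothetical discriminator logits
loss = sigmoid_cross_entropy_with_logits(logits=Dx, labels=np.ones_like(Dx))
```

With labels of all ones, each element reduces to -log(sigmoid(x)), so confident positive logits give a loss near zero and negative logits are penalized.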