MADONOKOUKI closed this issue 5 years ago.
Thanks for bringing this up, MADONOKOUKI. Can I ask what the value of batch_size is?
@schien1729 Thank you for replying. The batch size is 256.
It seems that the code thinks the loss you're passing (the variable `cost`) has length 1, and therefore can't split it into microbatches. `cost` should be a vector of length 256 (your batch size). Is it possible that you've turned it into a scalar, perhaps by using a reduce function?
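For illustration, a minimal sketch of the difference (assuming a TF 1.x setup; the tensor names and the input width of 784 here are placeholders, not taken from the code below):

```python
import tensorflow as tf

# Hypothetical reconstruction tensors, just to illustrate shapes.
y_true = tf.placeholder(tf.float32, [None, 784])
y_pred = tf.placeholder(tf.float32, [None, 784])

# Per-example loss: shape [batch_size]. DP-SGD can split this into microbatches.
vector_cost = tf.reduce_mean(tf.square(y_true - y_pred), axis=1)

# Fully reduced loss: shape []. DP-SGD cannot split a scalar into microbatches.
scalar_cost = tf.reduce_mean(vector_cost)
```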
I used mean squared error since I'm minimizing the error between input and output. So I wrote:
```python
ae_net = Autoencoder(inputDim, l2scale, compressDims, aeActivation,
                     decompressDims, dataType)  # autoencoder network
clipnorm = 5.0
standard_deviation = 0.0001

# tf Graph input (only pictures)
X = tf.placeholder("float", [None, inputDim])

# Construct model
loss, latent, output = ae_net(X)
print(output.shape)

# Prediction
y_pred = output
# Targets (labels) are the input data.
y_true = X

cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))
print(y_pred)
print(y_true)
# This overwrites the cost above; by default it also reduces to a scalar.
cost = tf.losses.mean_squared_error(labels=y_true,
                                    predictions=y_pred)

# Calculate loss as a vector (to support microbatches in DP-SGD).
# vector_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
#     labels=y_true, logits=y_pred)
# cost = tf.reduce_mean(vector_loss)

# optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)  # in medgan
# Use DP version of GradientDescentOptimizer. For illustration purposes,
# we do that here by calling optimizer_from_args() explicitly, though DP
# versions of standard optimizers are available in dp_optimizer.
optimizer = dp_optimizer.DPGradientDescentGaussianOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=1.1,
    num_microbatches=256,
    learning_rate=.15,
    population_size=60000).minimize(cost)
```
But is this cost function wrong? In that case, which cost function in TensorFlow should I use?
The issue might be that, by default, `tf.losses.mean_squared_error` aggregates the losses over all the examples it's given. Can you try this instead?
```python
cost = tf.losses.mean_squared_error(labels=y_true,
                                    predictions=y_pred,
                                    reduction=Reduction.NONE)
```
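(An alternative sketch, not from the thread: you can also build the per-example loss vector by hand, which makes the expected shape explicit; this assumes the `y_true`/`y_pred` tensors defined in the code above.)

```python
# Per-example MSE, shape [batch_size]: one loss value per example,
# so the DP optimizer can split the batch into microbatches.
cost = tf.reduce_mean(tf.square(y_true - y_pred), axis=1)
```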
Thanks!!!!
```python
cost = tf.losses.mean_squared_error(labels=y_true,
                                    predictions=y_pred,
                                    reduction="none")
```
I can train my code with this loss function. I couldn't import Reduction, so I directly wrote "none".
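For reference (assuming TF 1.x), the enum is also reachable as `tf.losses.Reduction` without a separate import, and the string "none" works because `Reduction.NONE` is defined as exactly that string:

```python
cost = tf.losses.mean_squared_error(
    labels=y_true,
    predictions=y_pred,
    reduction=tf.losses.Reduction.NONE)  # equivalent to reduction="none"
```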
Hi, I'm having this same error on the MNIST Keras example shipped with the package:

```
ValueError: Dimension size must be evenly divisible by 250 but is 1 for 'training/TFOptimizer/Reshape' (op: 'Reshape') with input shapes: [], [2] and with input tensors computed as partial shapes: input[1] = [250,?].
```

but only in TensorFlow 1.13; in 1.12 it works fine. Any ideas? Thanks!
Have you made the one-liner modification documented at the top of the MNIST Keras example?
Sorry, completely missed it :) I just copied the relevant code somewhere else, so I ended up not looking at the docstring.
So I guess this is just #21, never mind my comment!
I also met the same problem. How can I solve it, please?
I implemented a different neural network model and trained its loss function with dp_optimizer.DPGradientDescentGaussianOptimizer. It succeeded when num_microbatches was 1, but when num_microbatches was greater than 1, I got an error. I implemented it like this: