zhulingchen / tfp-tutorial

TensorFlow Probability Tutorial

Variable <tf.Variable 'conv2d_flipout_3/kernel_posterior_loc:0' shape=(3, 3, 1, 32) dtype=float32> has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval. #1

Open · yuxi120407 opened this issue 5 years ago

yuxi120407 commented 5 years ago

Hi Zhuling, your code is very clear and easy to follow. However, when I run the code in tfp-tutorial, the Uncertainty CNN part fails with the following error:


```
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_3 (InputLayer)         [(None, 28, 28, 1)]       0
_________________________________________________________________
conv2d_flipout_3 (Conv2DFlip (None, 14, 14, 32)        609
_________________________________________________________________
batch_normalization_2 (Batch (None, 14, 14, 32)        128
_________________________________________________________________
activation_2 (Activation)    (None, 14, 14, 32)        0
_________________________________________________________________
conv2d_flipout_4 (Conv2DFlip (None, 7, 7, 64)          36929
_________________________________________________________________
batch_normalization_3 (Batch (None, 7, 7, 64)          256
_________________________________________________________________
activation_3 (Activation)    (None, 7, 7, 64)          0
_________________________________________________________________
flatten_1 (Flatten)          (None, 3136)              0
_________________________________________________________________
dense_flipout_2 (DenseFlipou (None, 512)               3211777
_________________________________________________________________
dense_flipout_3 (DenseFlipou (None, 10)                10251
=================================================================
Total params: 3,259,950
Trainable params: 3,259,754
Non-trainable params: 196
_________________________________________________________________

Train on 54000 samples, validate on 6000 samples
```
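For reference, here is a sketch of how a model producing this summary might be defined; the layer arguments (kernel_size=3, strides=2, padding='same', relu activations) are assumptions inferred from the printed output shapes, not the tutorial's exact code:

```python
# Sketch of a Bayesian CNN matching the summary above; kernel sizes, strides,
# padding, and activations are assumptions inferred from the output shapes.
import tensorflow as tf
import tensorflow_probability as tfp

inputs = tf.keras.Input(shape=(28, 28, 1))
x = tfp.layers.Convolution2DFlipout(32, kernel_size=3, strides=2, padding='same')(inputs)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation('relu')(x)
x = tfp.layers.Convolution2DFlipout(64, kernel_size=3, strides=2, padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation('relu')(x)
x = tf.keras.layers.Flatten()(x)
x = tfp.layers.DenseFlipout(512)(x)
outputs = tfp.layers.DenseFlipout(10)(x)
bcnn_model = tf.keras.Model(inputs, outputs)
bcnn_model.summary()
```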


```
ValueError                                Traceback (most recent call last)
~/anaconda3/envs/tensorflow_new/lib/python3.6/site-packages/tensorflow/python/keras/api/_v1/keras/optimizers/__init__.py in <module>
     29                    experimental_run_tf_function=False)
     30 bcnn_model.summary()
---> 31 hist = bcnn_model.fit(X_train, y_train, batch_size=batch_size, epochs=n_epochs, verbose=1, validation_split=0.1)

~/anaconda3/envs/tensorflow_new/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    778         validation_steps=validation_steps,
    779         validation_freq=validation_freq,
--> 780         steps_name='steps_per_epoch')
    781
    782   def evaluate(self,

~/anaconda3/envs/tensorflow_new/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    155
    156   # Get step function and loop type.
--> 157   f = _make_execution_function(model, mode)
    158   use_steps = is_dataset or steps_per_epoch is not None
    159   do_validation = val_inputs is not None

~/anaconda3/envs/tensorflow_new/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py in _make_execution_function(model, mode)
    530   if model._distribution_strategy:
    531     return distributed_training_utils._make_execution_function(model, mode)
--> 532   return model._make_execution_function(mode)
    533
    534

~/anaconda3/envs/tensorflow_new/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in _make_execution_function(self, mode)
   2274   def _make_execution_function(self, mode):
   2275     if mode == ModeKeys.TRAIN:
-> 2276       self._make_train_function()
   2277       return self.train_function
   2278     if mode == ModeKeys.TEST:

~/anaconda3/envs/tensorflow_new/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in _make_train_function(self)
   2217       # Training updates
   2218       updates = self.optimizer.get_updates(
-> 2219           params=self._collected_trainable_weights, loss=self.total_loss)
   2220       # Unconditional updates
   2221       updates += self.get_updates_for(None)

~/anaconda3/envs/tensorflow_new/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py in get_updates(self, loss, params)
    489
    490   def get_updates(self, loss, params):
--> 491     grads = self.get_gradients(loss, params)
    492     grads_and_vars = list(zip(grads, params))
    493     self._assert_valid_dtypes([

~/anaconda3/envs/tensorflow_new/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py in get_gradients(self, loss, params)
    396             "gradient defined (i.e. are differentiable). "
    397             "Common ops without gradient: "
--> 398             "K.argmax, K.round, K.eval.".format(param))
    399     if hasattr(self, "clipnorm"):
    400       grads = [clip_ops.clip_by_norm(g, self.clipnorm) for g in grads]

ValueError: Variable <tf.Variable 'conv2d_flipout_3/kernel_posterior_loc:0' shape=(3, 3, 1, 32) dtype=float32> has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
```
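For diagnosis, here is a minimal sketch that lists which trainable weights receive a `None` gradient; it assumes the compiled `bcnn_model` from the notebook above and TF 1.x graph mode:

```python
import tensorflow.keras.backend as K

# After bcnn_model.compile(...), check every trainable weight whose gradient
# with respect to the total loss comes back as None.
grads = K.gradients(bcnn_model.total_loss, bcnn_model.trainable_weights)
for var, grad in zip(bcnn_model.trainable_weights, grads):
    if grad is None:
        print('No gradient for:', var.name)
```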

Thank you so much; I'm looking forward to your reply.

Best,
Xi

zhulingchen commented 5 years ago

Hi Xi,

I encountered the same problem at the very beginning of this journey.

Please take a look at the issue I filed with the TFP team: https://github.com/tensorflow/probability/issues/511

Hope this helps!
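For anyone who finds this later: as a hedged sketch (not necessarily the exact resolution in that thread), the same kind of Flipout model typically trains without the `None`-gradient error under TF 2.x with eager execution:

```python
# Hedged sketch: a small Flipout model trained under TF 2.x eager execution.
# This illustrates the general direction discussed in the linked issue, not
# its exact resolution.
import tensorflow as tf
import tensorflow_probability as tfp

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tfp.layers.DenseFlipout(64, activation='relu'),
    tfp.layers.DenseFlipout(10),
])

# The Flipout layers register their KL divergence terms via add_loss(), so
# Keras folds them into the training loss automatically. In practice the KL
# is usually rescaled by 1/num_train_examples via kernel_divergence_fn.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype('float32') / 255.0
model.fit(x_train, y_train, batch_size=128, epochs=1, validation_split=0.1)
```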