pierluigiferrari / ssd_keras

A Keras port of Single Shot MultiBox Detector
Apache License 2.0

An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval. #363

[Open] pucha48 opened this issue 3 years ago


pucha48 commented 3 years ago

While training for a single class, I am getting this error. I used the following: n_classes = 2 (1 background + 1 class: sheep, COCO class ID 19).

Shape of the `conv4_3_norm_mbox_conf` weights:

kernel: (3, 3, 512, 8)
bias: (8,)
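
For reference, a single-class setup along the lines of the repo's ssd300 training tutorial would look roughly like the sketch below (the scales and optimizer settings are assumptions, not values from this issue). In this implementation `n_classes` counts only the positive classes and the background class is added internally, which is consistent with the reported `conv4_3_norm_mbox_conf` depth of 4 boxes × (1 + 1) classes = 8. The point relevant to the error below is that `model.compile()` must receive the differentiable `SSDLoss(...).compute_loss` as its loss.

```python
from keras.optimizers import SGD

from models.keras_ssd300 import ssd_300
from keras_loss_function.keras_ssd_loss import SSDLoss

# Build the model for one positive class (sheep); background is added internally.
model = ssd_300(image_size=(300, 300, 3),
                n_classes=1,
                mode='training',
                scales=[0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05])  # assumed MS COCO scales

# Compile with the repo's differentiable SSD loss. Passing a loss built from
# non-differentiable ops (K.argmax, K.round, ...) here is the usual cause of the
# "An operation has `None` for gradient" error at training time.
sgd = SGD(lr=0.001, momentum=0.9, decay=0.0, nesterov=False)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=sgd, loss=ssd_loss.compute_loss)
```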

Error log:

ValueError                                Traceback (most recent call last)
<ipython-input-...> in <module>()
     10         validation_data=val_generator,
     11         validation_steps=ceil(val_dataset_size/batch_size),
---> 12         initial_epoch=initial_epoch)

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name + '` call to the ' +
     90                               'Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   1416             use_multiprocessing=use_multiprocessing,
   1417             shuffle=shuffle,
-> 1418             initial_epoch=initial_epoch)
   1419
   1420     @interfaces.legacy_generator_methods_support

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
     38
     39     do_validation = bool(validation_data)
---> 40     model._make_train_function()
     41     if do_validation:
     42         model._make_test_function()

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/engine/training.py in _make_train_function(self)
    507             training_updates = self.optimizer.get_updates(
    508                 params=self._collected_trainable_weights,
--> 509                 loss=self.total_loss)
    510             updates = (self.updates +
    511                        training_updates +

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name + '` call to the ' +
     90                               'Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/optimizers.py in get_updates(self, loss, params)
    473     @interfaces.legacy_get_updates_support
    474     def get_updates(self, loss, params):
--> 475         grads = self.get_gradients(loss, params)
    476         self.updates = [K.update_add(self.iterations, 1)]
    477

/media/antpc/main_drive/anaconda3/envs/mafat/lib/python3.6/site-packages/keras/optimizers.py in get_gradients(self, loss, params)
     89         grads = K.gradients(loss, params)
     90         if None in grads:
---> 91             raise ValueError('An operation has `None` for gradient. '
     92                              'Please make sure that all of your ops have a '
     93                              'gradient defined (i.e. are differentiable). '

ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
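
The traceback ends in `keras/optimizers.py`, where `K.gradients(loss, params)` returned `None` for at least one trainable weight, i.e. some op between the weights and the compiled loss has no gradient. Below is a minimal, self-contained illustration of the mechanism (an explanatory sketch, not code from this repository; Keras 2 with the TensorFlow 1.x graph backend assumed):

```python
# Illustration only: K.argmax has no registered gradient, so a loss built on it
# makes K.gradients() return None, which the optimizer turns into this ValueError.
import keras.backend as K

y_pred = K.placeholder(shape=(None, 4))

bad_loss = K.sum(K.cast(K.argmax(y_pred, axis=-1), 'float32'))  # non-differentiable
good_loss = K.sum(K.square(y_pred))                             # differentiable

print(K.gradients(bad_loss, [y_pred]))   # [None] -> raises the ValueError when the train function is built
print(K.gradients(good_loss, [y_pred]))  # [<gradient tensor>]
```

In practice this usually means the function passed as `loss` to `model.compile()` (or a custom layer in the model) uses such an op; compiling with `SSDLoss(...).compute_loss`, as in the repo's tutorials, avoids it.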
stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.