Closed rohitrawat closed 7 years ago
Looking at keras/engine/training.py#L1483, `'batch_size'` is not included in `params` when it is built by `fit_generator()`:
```python
callbacks.set_params({
    'nb_epoch': nb_epoch,
    'nb_sample': samples_per_epoch,
    'verbose': verbose,
    'do_validation': do_validation,
    'metrics': callback_metrics,
})
```
On the other hand, `_fit_loop()`, which is called by `fit()` (keras/engine/training.py#L855), does define `'batch_size'`:
```python
callbacks.set_params({
    'batch_size': batch_size,
    'nb_epoch': nb_epoch,
    'nb_sample': nb_train_sample,
    'verbose': verbose,
    'do_validation': do_validation,
    'metrics': callback_metrics or [],
})
```
It turns out that `fit_generator()` does not know the batch size, since it is determined by however many samples the generator enqueues per batch. To fix this, we can count samples instead of batches: the Callback documentation says that the number of samples in the current batch is passed as `logs['size']`. I have written a fix in my fork.
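The sample-counting idea can be sketched without Keras at all. Below is a minimal, self-contained illustration (a hypothetical `ProgressCallback`, not the actual keras-tqdm code): instead of precomputing a batch count from `params['batch_size']`, it accumulates `logs['size']` against `params['nb_sample']`.

```python
class ProgressCallback:
    """Minimal stand-in for a Keras 1.x-style callback (illustrative only)."""

    def set_params(self, params):
        self.params = params

    def on_epoch_begin(self, epoch, logs=None):
        self.seen = 0  # samples processed so far this epoch

    def on_batch_end(self, batch, logs=None):
        logs = logs or {}
        # 'size' is the number of samples in the current batch, which is
        # reported in the batch logs even when 'batch_size' is absent
        # from params (as with fit_generator()).
        self.seen += logs.get('size', 0)
        self.fraction_done = self.seen / self.params['nb_sample']


# Simulate fit_generator(): params without 'batch_size', and a final
# batch that is smaller than the others.
cb = ProgressCallback()
cb.set_params({'nb_epoch': 1, 'nb_sample': 100, 'verbose': 0,
               'do_validation': False, 'metrics': []})
cb.on_epoch_begin(0)
for size in (32, 32, 32, 4):
    cb.on_batch_end(0, logs={'size': size})

print(cb.seen)           # 100
print(cb.fraction_done)  # 1.0
```

Progress reaches exactly 1.0 even though the per-batch size varies, which is why counting samples works where dividing by a fixed `batch_size` cannot.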
Unlike the `fit()` method, `fit_generator()` does not have a `'batch_size'` parameter defined. This produces a `KeyError: 'batch_size'` error on line 81 of tqdm_callback.py:

```python
self.batch_count = int(ceil(self.params['nb_sample'] / self.params['batch_size']))
```

`fit_generator()` is used instead of `fit()` when reading image directories and augmenting images on the fly. Here is a minimum working example that works when not using `, verbose=0, callbacks=[TQDMCallback()]`, but immediately fails when used with it:
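The example itself is not reproduced here; a rough reconstruction of the kind of setup described might look like the following. This is a hypothetical sketch, assuming Keras 1.x (`samples_per_epoch`/`nb_epoch` arguments) and keras-tqdm's `TQDMCallback`; the model, directory path, and sizes are placeholders, not the reporter's original code, and running it requires Keras plus a directory of class-labeled images.

```python
# Hypothetical reconstruction; requires keras, keras-tqdm, and an
# image directory laid out one subdirectory per class.
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras_tqdm import TQDMCallback

model = Sequential([Flatten(input_shape=(64, 64, 3)),
                    Dense(2, activation='softmax')])
model.compile(optimizer='sgd', loss='categorical_crossentropy')

# Augment images on the fly, as described above.
datagen = ImageDataGenerator(rescale=1. / 255, horizontal_flip=True)
train_gen = datagen.flow_from_directory('data/train',  # placeholder path
                                        target_size=(64, 64),
                                        batch_size=32)

# Runs fine without the callback; attaching TQDMCallback raises
# KeyError: 'batch_size', because fit_generator() never puts
# 'batch_size' into the callback params.
model.fit_generator(train_gen, samples_per_epoch=320, nb_epoch=1,
                    verbose=0, callbacks=[TQDMCallback()])
```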