Closed kristosh closed 6 years ago
I am trying to understand the loss function of the discriminator:
```python
def discriminator_loss(y_true, y_pred):
    return K.mean(
        K.binary_crossentropy(
            K.flatten(y_pred),
            K.concatenate([
                K.ones_like(K.flatten(y_pred[:BATCH_SIZE, :, :, :])),
                K.zeros_like(K.flatten(y_pred[:BATCH_SIZE, :, :, :])),
            ]),
        ),
        axis=-1,
    )
```
I am wondering why `y_true` is not used at all while `y_pred` is used twice. Is it some kind of mistake? The way the discriminator is trained is:
```python
# Training D:
real_pairs = np.concatenate((X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], image_batch), axis=1)
fake_pairs = np.concatenate((X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], generated_images), axis=1)
X = np.concatenate((real_pairs, fake_pairs))
y = np.concatenate((np.ones((BATCH_SIZE, 1, 64, 64)), np.zeros((BATCH_SIZE, 1, 64, 64))))
d_loss = discriminator.train_on_batch(X, y)
```
Do I also need to use `y_true` in the discriminator loss, or not?
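To check my own understanding, here is a minimal NumPy sketch (toy shapes and `BATCH_SIZE = 2` are my assumptions, not from the repo) comparing the labels the custom loss builds from `y_pred` against the `y` passed to `train_on_batch`. If they coincide, the unused `y_true` would be redundant rather than a bug:

```python
import numpy as np

BATCH_SIZE = 2  # toy value for illustration

# Discriminator output for real pairs followed by fake pairs
# (1x4x4 maps stand in for the 1x64x64 maps in the real code)
y_pred = np.random.rand(2 * BATCH_SIZE, 1, 4, 4)

# Labels built inside the custom loss: ones for the first
# BATCH_SIZE samples (real), zeros for the rest (fake)
implicit_labels = np.concatenate([
    np.ones_like(y_pred[:BATCH_SIZE].ravel()),
    np.zeros_like(y_pred[:BATCH_SIZE].ravel()),
])

# Labels passed as y to train_on_batch (i.e. the ignored y_true)
y_true = np.concatenate([
    np.ones((BATCH_SIZE, 1, 4, 4)),
    np.zeros((BATCH_SIZE, 1, 4, 4)),
]).ravel()

# The two label tensors are identical element-for-element
print(np.array_equal(implicit_labels, y_true))
```

So if I read the code correctly, the loss hard-codes exactly the labels that are also passed in as `y`, which would explain why ignoring `y_true` still trains correctly, but I would like to confirm this.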