Open ZhuYQi opened 5 years ago
import matplotlib.pyplot as plt

loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
fig = plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and Validation loss')
plt.legend()
fig.savefig('loss.png')
plt.show()
@ZhuYQi : Just add this code after the model.fit_generator part! That should do the trick and save your loss curve to loss.png.
Thanks a lot, this works well. But we want to plot the xy_loss, wh_loss, confidence_loss, and class_loss. How should we do that? @sirius0503
xy_loss, wh_loss, and confidence_loss haven't been calculated here; there is only yolo_loss. If you calculate all these metrics by writing explicit functions that take the y_true and y_pred tensors as arguments, and then add metrics=[xy_loss, wh_loss, confidence_loss, ...] in model.compile, you can plot these metrics too, similar to the code above!
For example:
def calculate_xy_loss(y_true, y_pred):
    ...
    return xy_loss

model.compile(optimizer=Adam(lr=1e-3),
              loss='yolo_loss', metrics=[calculate_xy_loss])

"""
See the history.history dictionary as to what name has been assigned to your
particular loss; metrics are recorded under the function name, and validation
metrics get a 'val_' prefix
"""
xy_loss = history.history['calculate_xy_loss']
xy_val_loss = history.history['val_calculate_xy_loss']
epochs = range(1, len(xy_loss) + 1)
fig = plt.figure()
plt.plot(epochs, xy_loss, 'bo', label='Training xy_loss')
plt.plot(epochs, xy_val_loss, 'b', label='Validation xy_loss')
plt.title('Training and Validation xy_loss')
plt.legend()
fig.savefig('xy_loss.png')
plt.show()
In doing so, it seems that only one loss curve can be plotted for each training session. @sirius0503
@ZhuYQi : How do you want it to be?
After a complete training run, plot the five loss curves in one figure, including xy_loss, wh_loss, etc. @sirius0503
@ZhuYQi Try using something like this: How to plot multiple functions on the same figure, in Matplotlib? or go with matplotlib subplots
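Building on that suggestion, here is a minimal sketch of drawing several metric curves from history.history in one figure. The helper name plot_all_losses and the metric key names in the commented call are assumptions, not part of the original code; check history.history.keys() for the real names:

```python
import matplotlib.pyplot as plt

def plot_all_losses(history, metric_names, out_path='all_losses.png'):
    """Plot every named metric from a Keras History object in one figure."""
    fig = plt.figure()
    for name in metric_names:
        values = history.history[name]          # one value per epoch
        epochs = range(1, len(values) + 1)
        plt.plot(epochs, values, label=name)    # each metric gets its own curve
    plt.title('Loss components per epoch')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend()
    fig.savefig(out_path)
    plt.show()

# Hypothetical key names -- adapt to whatever your compiled metrics are called:
# plot_all_losses(history, ['loss', 'xy_loss', 'wh_loss', 'confidence_loss', 'class_loss'])
```

Since all curves go through the same plt.plot axes before savefig, they land in a single figure with one shared legend.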
@sirius0503 Sorry, that was our wrong expression. Currently the code can only return the total loss and plot its curve, but how do we get all the losses (such as xy_loss, etc.) and plot their curves after a complete training run?
As you can see above, you'll have to make a Keras metric function with the body as I've shown, and then include it in model.compile through the metrics keyword argument. You'll then get your xy_loss, etc. as part of the history.history object, as shown here: Accuracy and Loss visualization after training in keras, which you can then visualize with ease!
@sirius0503 We still can't fully understand the method you provided. If it is convenient, could you provide a small example? Our email address is fzu2010@163.com. Thanks!
@ZhuYQi : I am talking about a custom keras metric such as you can find in the keras documentation below: Custom Keras metric
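As a small self-contained case, here is a hedged sketch using tf.keras with a toy model standing in for YOLO; the metric xy_metric is a hypothetical placeholder for the real xy_loss computation, not code from this repository:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def xy_metric(y_true, y_pred):
    # Hypothetical stand-in for an xy_loss component:
    # mean absolute error of the first output column only.
    return tf.reduce_mean(tf.abs(y_true[:, :1] - y_pred[:, :1]))

# Toy model in place of the YOLO network
model = models.Sequential([
    layers.Input(shape=(4,)),
    layers.Dense(2),
])
model.compile(optimizer='adam', loss='mse', metrics=[xy_metric])

x = np.random.rand(32, 4).astype('float32')
y = np.random.rand(32, 2).astype('float32')
history = model.fit(x, y, epochs=2, verbose=0)

# The custom metric is recorded under its function name
print(sorted(history.history.keys()))
```

After fit returns, history.history['xy_metric'] holds one value per epoch, which you can plot exactly like the total loss above.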
@ZhuYQi Could you please add my QQ number? I want to ask a question. Thank you. QQ: 2803788198
I cannot find the dictionary history.history. Where did I go wrong? Can you tell me? @sirius0503 I have tried history = model.fit(), but it didn't work.
@litterbearly : I'd need you to show me your code to give you a better response, but if you're getting a NameError on the history object, then I think maybe you called history = model.fit() inside a function that trains the network. Remember that inside that function (main, I suppose, the training function name), history is a local variable, so it will not be available to another cell in the jupyter-notebook. You'll either have to return the variable with something like return history, or do all the history object analysis related to loss and accuracy inside the function, just like I did, and save the loss and accuracy curves.
Something like this:
def main():  # Training function
    ...
    history = model.fit()
    ...
    return history

myhistory = main()
Or go with saving the figures and loss values in the function itself, maybe even in a global variable, if possible.
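Combining both options, a minimal sketch might look like the following; the dict here is a hypothetical stand-in for the real History object returned by model.fit:

```python
import matplotlib.pyplot as plt

def main():
    # ... build and train the model here, e.g. history = model.fit(...) ...
    # Hypothetical stand-in for history.history so the sketch is runnable:
    history_dict = {'loss': [2.0, 1.2, 0.8], 'val_loss': [2.1, 1.5, 1.1]}

    # Save the curves inside the function, before the local variable is gone
    epochs = range(1, len(history_dict['loss']) + 1)
    fig = plt.figure()
    plt.plot(epochs, history_dict['loss'], 'bo', label='Training loss')
    plt.plot(epochs, history_dict['val_loss'], 'b', label='Validation loss')
    plt.legend()
    fig.savefig('loss.png')

    # Also return the values so they stay usable outside the function
    return history_dict

myhistory = main()
```

This way the figure is saved even if you never touch the returned value, and myhistory is still available in later notebook cells.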
Hi there, how do we plot the loss curves for each part (such as xy_loss, etc.)? @qqwweee