Closed tirthasheshpatel closed 3 years ago
I have also printed all the metrics after each epoch of training, unlike before, where we printed only the metrics of the last batch
Check and merge @Shrey-Viradiya
Can you please add the feature of saving the training history so it can be plotted afterwards, as @tirth-hihoriya said? We'll merge it together
Reopen a new pull request for the complete work
I will push the changes to this branch only
I have made the necessary changes. Please review them now, @Shrey-Viradiya
okay
I think it is good to merge
Have you tried running it? No bugs, right?
I am trying it now. Looking at it, everything seems fine
Confirm here to merge the pull request
I am on epoch 13. It will take an hour or two more to train the model fully... We will merge this as soon as the training finishes and the plot shows up
I was thinking of saving the history per epoch as a NumPy array file on disk rather than returning it after training completes. That way, if we stop the training in the middle to save time, we can still plot it manually
It will take a lot of time to save the model every time. We can save the model, say, every three or five epochs, but not every epoch. It is just too expensive
Our current code saves the best model so far, with testing accuracy as the criterion. What I am suggesting is to keep the loss and accuracy of all epochs in a NumPy array and save it to disk after every epoch.
I am doing the same with lists. We can't use NumPy arrays directly, since the final size is unknown until runtime, so we have to use Python lists. But we can convert them to NumPy arrays and store them on disk. Let me try that
That's what I'm saying: if we interrupt the code because we get good results early, the training method will never return the lists
Done, done, done!!! Tested. Working. Merge! LGTM
I have saved four NumPy arrays ==> train_losses, train_accuracies, test_losses, test_accuracies
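The save step for those four arrays might look something like this (a minimal sketch; the list names match the arrays mentioned above, but the placeholder values and filenames are illustrative):

```python
import numpy as np

# Hypothetical per-epoch metric lists, appended to during training.
train_losses, train_accuracies = [1.2, 0.9], [0.55, 0.68]
test_losses, test_accuracies = [1.1, 0.95], [0.52, 0.64]

# Persist each list as a .npy file so it can be reloaded and plotted
# later, even if training is interrupted partway through.
for name, values in [
    ("train_losses", train_losses),
    ("train_accuracies", train_accuracies),
    ("test_losses", test_losses),
    ("test_accuracies", test_accuracies),
]:
    np.save(f"{name}.npy", np.array(values))

# Reload for plotting:
history = np.load("train_losses.npy")
```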
It is saving everything on Google Colab. So, we are safe to merge
LGTM? Or do I have to change the code to save the lists as NumPy objects after every epoch?
Done. See the diff now
No need to save after every epoch. It will save when model training ends or when the user force-stops the training.
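Saving on normal completion or on a user force-stop could be sketched with a try/finally around the training loop (hypothetical names; the raised KeyboardInterrupt just simulates a Ctrl+C so the pattern is testable):

```python
import numpy as np

def train(num_epochs=100):
    """Hypothetical training loop that records metrics per epoch and
    saves them even if the user interrupts with Ctrl+C."""
    train_losses = []
    try:
        for epoch in range(num_epochs):
            # ... run one epoch of training here ...
            train_losses.append(1.0 / (epoch + 1))  # placeholder metric
            if epoch == 2:
                raise KeyboardInterrupt  # simulate a user force-stop
    except KeyboardInterrupt:
        print("Training interrupted; saving partial history.")
    finally:
        # Runs on normal completion *and* on interruption.
        np.save("train_losses_partial.npy", np.array(train_losses))
    return train_losses

losses = train()
```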
Merging...
So, I have rerun the model on the test data after each epoch of training, so that we get the actual accuracy of the model after training and not during training batches