lemmaa / journal


SGD, batch size, mini-batch, iterations and epoch #1

Open · lemmaa opened this issue 6 years ago

lemmaa commented 6 years ago

What are the meanings of batch size, mini-batch, iterations and epoch in neural networks?

Gradient descent is an iterative algorithm that computes the gradient of a function and uses it to update the function's parameters so as to find a minimum (or maximum) of that function. In the case of neural networks, the function to be optimized (minimized) is the loss function, and the parameters are the weights and biases of the network.
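As a minimal sketch of the update rule (the quadratic loss, learning rate, and iteration count below are illustrative assumptions, not anything specified in this thread):

```python
# One-parameter gradient descent on the toy loss L(w) = (w - 3)^2.
# The loss, learning rate, and step count are illustrative choices.

def grad(w):
    return 2.0 * (w - 3.0)  # dL/dw

w = 0.0    # initial parameter value
lr = 0.1   # learning rate (step size)

for _ in range(50):
    w = w - lr * grad(w)   # move against the gradient to reduce the loss

print(w)   # approaches the minimizer w = 3
```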

Number of iterations (n): The number of times the gradient is estimated and the parameters of the neural network are updated using a batch of training instances. The batch size B is the number of training instances used in one iteration.

When the total number of training instances (N) is large, a small number of training instances (B << N), which constitute a mini-batch, can be used in one iteration to estimate the gradient of the loss function and update the parameters of the neural network.

It takes n (=N/B) iterations to use the entire training data once. This constitutes an epoch. So, the total number of times the parameters get updated is (N/B)*E, where E is the number of epochs.
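As a quick worked example of this arithmetic (the values of N, B, and E below are made up for illustration):

```python
# Iteration/epoch bookkeeping from the definitions above.
N = 60_000   # total training instances (illustrative)
B = 128      # mini-batch size (illustrative)
E = 10       # number of epochs (illustrative)

iterations_per_epoch = N // B             # n = N / B (any last partial batch dropped)
total_updates = iterations_per_epoch * E  # (N / B) * E parameter updates in total

print(iterations_per_epoch)  # 468
print(total_updates)         # 4680
```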

Three modes of gradient descent:

Batch mode: B = N; one epoch is the same as one iteration.

Mini-batch mode: 1 < B < N; each epoch consists of N/B iterations (see the sketch below).

Stochastic (online) mode: B = 1; the parameters are updated after every single training instance, so one epoch consists of N iterations.
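Below is a minimal sketch of the mini-batch mode in plain NumPy. The linear-regression data, model, and hyperparameters are illustrative assumptions, not taken from any of the linked answers.

```python
import numpy as np

rng = np.random.default_rng(0)
N, B, E = 1_000, 64, 20                  # dataset size, batch size, epochs (illustrative)
X = rng.normal(size=(N, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=N)

w = np.zeros(3)                          # parameters to learn
lr = 0.1                                 # learning rate
for epoch in range(E):
    perm = rng.permutation(N)            # reshuffle the data every epoch
    for start in range(0, N, B):         # roughly N/B iterations per epoch
        idx = perm[start:start + B]
        Xb, yb = X[idx], y[idx]
        g = 2.0 / len(idx) * Xb.T @ (Xb @ w - yb)  # mini-batch gradient of the MSE loss
        w -= lr * g                      # one parameter update per mini-batch

print(w)  # should end up close to true_w
```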

https://www.quora.com/What-are-the-meanings-of-batch-size-mini-batch-iterations-and-epoch-in-neural-networks

lemmaa commented 6 years ago

Intuitively, how does batch size impact a convolutional network training?

Two factors are at play: the time efficiency of training and the noisiness of the gradient estimate.

Updating the parameters using all the training data at once is not efficient: with the same amount of computation, you can update the parameters several times if you only use part of the data for each update. On the other hand, updating from a single sample (online updating) is noisy, because one sample may not be a good representation of the whole dataset. You can consider a mini-batch to be an approximation of the whole dataset: large enough to give a stable gradient estimate, small enough to keep each update cheap. Usually a size of 64, 128, or 256 is used.
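As a sketch of the noise argument (the toy linear model, the batch sizes, and the number of resamples are all illustrative assumptions), comparing mini-batch gradients against the full-batch gradient shows the estimate getting less noisy as B grows:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
X = rng.normal(size=(N, 2))
y = X @ np.array([1.0, -1.0]) + rng.normal(size=N)
w = np.zeros(2)  # evaluate all gradients at one fixed parameter value

def batch_grad(idx):
    Xb, yb = X[idx], y[idx]
    return 2.0 / len(idx) * Xb.T @ (Xb @ w - yb)  # MSE gradient on a subset

full = batch_grad(np.arange(N))          # full-batch ("true") gradient
for B in (1, 64, 256):
    ests = [batch_grad(rng.choice(N, size=B, replace=False)) for _ in range(500)]
    noise = np.mean([np.linalg.norm(g - full) for g in ests])
    print(B, round(float(noise), 3))     # spread shrinks roughly like 1 / sqrt(B)
```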

https://www.quora.com/Intuitively-how-does-batch-size-impact-a-convolutional-network-training

lemmaa commented 6 years ago

The batch size is the number of training samples used to make one update to the model parameters. Ideally you would use all the training samples to calculate the gradient for every single update; however, that is not efficient. Simply put, the batch size controls how much computation each parameter update costs.

The plot I am attaching shows the effect of the batch size on the validation accuracy of the model. One can easily see that the batch size, which heavily influences the learned parameters, also affects the prediction accuracy.

[Attached plot: effect of batch size on validation accuracy]