12neurotool opened 5 years ago
At each iteration, `batch_size` windows are extracted from the queue, fed to the network, and the weights are updated using the gradient of the loss function with respect to the weights.
The concept of an epoch doesn't apply in this case, because NiftyNet uses patch-based training (patch and window mean the same thing here).
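As far as I can tell, the training loop described above looks roughly like the following. This is only a minimal NumPy sketch of my understanding (a toy linear model standing in for the network), not NiftyNet's actual code; `patch_queue`, `targets`, and the loop variables are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical patch queue: a pool of fixed-size windows already
# sampled from the input volumes (100 windows, 8 features each).
patch_queue = rng.normal(size=(100, 8))
true_w = np.ones((8, 1))
targets = patch_queue @ true_w + 0.1 * rng.normal(size=(100, 1))

weights = np.zeros((8, 1))   # toy "network": a single linear layer
batch_size = 4
learning_rate = 0.01
num_iterations = 50          # training length is set in iterations, not epochs

for _ in range(num_iterations):
    # One iteration: draw batch_size windows from the queue...
    idx = rng.integers(0, len(patch_queue), size=batch_size)
    x, y = patch_queue[idx], targets[idx]
    # ...feed them through the network and compute the gradient of
    # the mean-squared-error loss with respect to the weights...
    grad = 2.0 * x.T @ (x @ weights - y) / batch_size
    # ...then update the weights with that gradient.
    weights -= learning_rate * grad
```

Since windows are drawn from the queue rather than by sweeping the whole dataset, a single iteration may revisit some volumes and skip others, which is why "one pass over the data" (an epoch) isn't a natural unit here.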
I am not clear on what an iteration means here. Could you explain it in more detail? Also, how does it relate to an epoch?
Thanks.