Closed evabalini closed 2 years ago
Hello, thank you for your significant contribution. I have a question about the model's epochs: if I set, say, 1 epoch in the constructor, why does the verbose output show the model being trained for 1, 2, 4, 8, and then 16 epochs?

It is because 1 epoch might not be enough for the feature selection process to converge; it needs enough training time before it can converge stably. This is why we keep doubling the number of epochs until we observe convergence.
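The doubling schedule being discussed can be sketched roughly as follows. This is a hypothetical illustration, not the library's actual code: `train_fn`, the score-stability criterion, and all parameter names are assumptions.

```python
def train_until_convergence(train_fn, initial_epochs=1, max_epochs=64, tol=1e-4):
    """Keep doubling the epoch budget until the per-feature scores
    returned by train_fn stop changing (hypothetical sketch)."""
    epochs = initial_epochs
    prev_scores = None
    scores = None
    while epochs <= max_epochs:
        scores = train_fn(epochs)  # train for `epochs` epochs, get feature scores
        if prev_scores is not None and all(
            abs(a - b) < tol for a, b in zip(scores, prev_scores)
        ):
            break  # scores stable between budgets: treat as converged
        prev_scores = scores
        epochs *= 2  # 1, 2, 4, 8, 16, ... as seen in the verbose output
    return scores
```

So even if the constructor is given 1 epoch, that value acts as the starting budget, and training is repeated with a doubled budget until the selection stabilizes.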