This PR implements the early-stopping algorithm described in the corresponding evidence-accumulation task implemented in TensorFlow. The only difference is that here the early-stopping criterion is evaluated after each validation step (e.g., every ten iterations) rather than after every iteration as in the TensorFlow implementation. Since the criterion is assessed primarily against the newest validation value, re-evaluating it on every one of the ten iterations with the same validation result, as the TensorFlow implementation does, seems wasteful.
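The evaluation schedule described above could be sketched as follows. This is a minimal illustration, not the PR's actual code: `validate`, `patience`, and `validation_interval` are hypothetical placeholders.

```python
def train(validate, num_iterations, validation_interval=10, patience=3):
    """Run training, checking the early-stopping criterion only at
    validation steps (every `validation_interval` iterations), not on
    every iteration. `validate` is a hypothetical callback returning
    the current validation loss."""
    best_loss = float("inf")
    checks_without_improvement = 0
    it = 0
    for it in range(1, num_iterations + 1):
        # ... one training step would happen here ...
        if it % validation_interval == 0:
            # A new validation value is available: assess the criterion once.
            val_loss = validate()
            if val_loss < best_loss:
                best_loss = val_loss
                checks_without_improvement = 0
            else:
                checks_without_improvement += 1
            if checks_without_improvement >= patience:
                break  # stop early
    return it
```

With this schedule the criterion fires once per validation result, instead of being re-checked on ten consecutive iterations against the same value.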
- [x] fix application of the wrong learning rate in the case of lazy synapses