PastelBelem8 / ADOPT.jl

This repository is the result of my master's thesis on Multi-Objective Optimization. It focuses on Pareto-based optimization rather than single-objective optimization with preference articulation. Because we target time-consuming optimization routines, we emphasize model-based methods that allow for faster convergence. This is particularly relevant for Architectural Design Optimization, which depends on time-intensive simulations (a single simulation may take minutes, hours, or even days to complete).
GNU General Public License v3.0

MLPRegressor - Early Stopping #19

Open PastelBelem8 opened 5 years ago

PastelBelem8 commented 5 years ago

Training-based early stopping must be changed to accumulate the loss per batch instead of the loss per sample. The current implementation works, but the training and "validation" options behave differently when validation_fraction = 0, which ideally should not happen. The difference is that MLPRegressor tracks the loss per batch, whereas EarlyStopping tracks the loss per sample.
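To illustrate why the two trackers can disagree, here is a minimal sketch (not the repository's actual code; the function names `epoch_loss_per_batch` and `epoch_loss_per_sample` are hypothetical) contrasting the two aggregation strategies. When batches have unequal sizes, the mean of batch means differs from the sample-weighted mean, so an early-stopping criterion watching one quantity can fire at a different epoch than one watching the other:

```python
def epoch_loss_per_batch(batch_losses, batch_sizes):
    # Mean of batch means: every batch contributes equally,
    # regardless of how many samples it contains.
    return sum(batch_losses) / len(batch_losses)

def epoch_loss_per_sample(batch_losses, batch_sizes):
    # Sample-weighted mean: equivalent to averaging the loss
    # over every individual sample seen in the epoch.
    total = sum(l * n for l, n in zip(batch_losses, batch_sizes))
    return total / sum(batch_sizes)

# A smaller final batch makes the two aggregates disagree.
losses = [1.0, 1.0, 4.0]   # mean loss of each batch
sizes = [32, 32, 8]        # last batch is smaller

print(epoch_loss_per_batch(losses, sizes))   # 2.0
print(epoch_loss_per_sample(losses, sizes))  # 1.333...
```

With equal batch sizes the two values coincide, which is why the discrepancy only surfaces in edge cases such as a ragged final batch or `validation_fraction = 0`.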