This is the result of my master thesis on Multi-Objective Optimization. This repository focuses on Pareto-based optimization rather than Single-Objective optimization with preference articulation. We target time-consuming optimization routines and, as a result, we focus on model-based methods to allow for faster convergence. This is relevant for Architectural Design Optimization, which depends on time-intensive simulations (e.g., minutes, hours, or even days to complete a single simulation).
TODO: Change training-based early stopping to accumulate the batch loss instead of the per-sample loss. The current implementation works, but the training and "validation" criteria diverge when validation_fraction = 0, which ideally should not happen. The difference is that MLPRegressor tracks the loss per batch, whereas EarlyStopping tracks the loss per sample.
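To illustrate why the two bookkeeping schemes diverge, here is a minimal sketch (the function names and loss values are hypothetical, not part of this repository): averaging per-batch losses weights every batch equally, while averaging per-sample losses weights each batch by its size. The two agree only when all batches have the same size, which fails whenever the dataset size is not a multiple of the batch size.

```python
def epoch_loss_per_sample(batch_losses, batch_sizes):
    # Mean over all samples: each batch contributes in
    # proportion to how many samples it contains.
    total = sum(l * n for l, n in zip(batch_losses, batch_sizes))
    return total / sum(batch_sizes)

def epoch_loss_per_batch(batch_losses, batch_sizes):
    # Mean over batches (MLPRegressor-style bookkeeping): the
    # last, smaller batch gets the same weight as full batches.
    return sum(batch_losses) / len(batch_losses)

# 100 samples with batch_size=32 -> batches of 32, 32, 32, 4.
batch_sizes = [32, 32, 32, 4]
batch_losses = [0.50, 0.40, 0.30, 0.90]  # hypothetical mean loss of each batch

print(epoch_loss_per_sample(batch_losses, batch_sizes))  # 0.42
print(epoch_loss_per_batch(batch_losses, batch_sizes))   # 0.525
```

Because the small final batch is over-weighted in the per-batch average, an early-stopping rule comparing successive epoch losses can fire at different epochs depending on which scheme is used, which is the inconsistency noted above.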