Made the following changes to the DTLZ docs (in the README.md file):
updated the usage docs to reflect the use of GD+ instead of RMSE as the second performance metric;
modified the "failures" option for DTLZ, so that it does not affect the solution set;
provided additional documentation under CONFIGURATION on the environment variables that can be set and what they do, plus a statement under USAGE on the (lack of) metadata for these problems
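Since GD+ replaces RMSE as the second metric, a brief illustration of how it behaves may be useful. The sketch below is not the project's implementation; it is a minimal, self-contained version of the standard GD+ indicator (for minimization), showing its key property: a point is only penalized along objectives where it is *worse* than the nearest reference-front point.

```python
import numpy as np

def gd_plus(approx, ref_front):
    """Generational Distance Plus (minimization).

    For each point a in the approximation set, take the smallest
    modified distance d+(a, z) = sqrt(sum_i max(a_i - z_i, 0)^2)
    over reference-front points z, then average.  Unlike plain GD,
    d+ ignores objectives where a is already at least as good as z.
    """
    approx = np.asarray(approx, dtype=float)
    ref_front = np.asarray(ref_front, dtype=float)
    # component-wise "excess" of each approx point over each ref point
    diff = np.maximum(approx[:, None, :] - ref_front[None, :, :], 0.0)
    d_plus = np.sqrt((diff ** 2).sum(axis=2))  # shape (n_approx, n_ref)
    return d_plus.min(axis=1).mean()
```

For example, a point that dominates some reference point scores 0, while a dominated point is charged only for its shortfall.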
For the JAHSBench problem:
added carefully chosen reference points to the metrics.py file, based on a study of the raw validation dataset from the JAHS-Bench-201 repository -- note that these reference points also implicitly determine the lower bounds on the "interesting range" of objective values;
added documentation to the README.md file on USAGE and CONFIGURATION
added documentation to the README.md file on the modifications made to the original JAHS-Bench-201 problem and why, our choice of reference points and how it affects the moo_lower_bound, and our performance evaluation tools
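To see how a reference point implicitly bounds the "interesting range" of objective values, consider a hypervolume-style metric: any solution at or beyond the reference point in some objective contributes nothing. The sketch below is a generic illustration (not the project's evaluation code) for the two-objective minimization case.

```python
import numpy as np

def hypervolume_2d(points, ref_point):
    """Hypervolume for 2 objectives (minimization) w.r.t. a reference
    point.  Points that do not strictly dominate the reference point
    are discarded -- this is how the reference point caps the
    'interesting range' of objective values."""
    pts = np.asarray(points, dtype=float)
    ref = np.asarray(ref_point, dtype=float)
    # keep only points strictly better than the reference in both objectives
    pts = pts[np.all(pts < ref, axis=1)]
    if len(pts) == 0:
        return 0.0
    # sweep in order of the first objective, summing disjoint rectangles
    pts = pts[np.argsort(pts[:, 0])]
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:  # skip points dominated within the set
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv
```

Tightening the reference point shrinks the dominated region being measured, which is why the choice of reference points also pins down a sensible moo_lower_bound.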