Core code/data/docs changes
Brief description of changes
Added a placeholder to SCALED_ERRORS_FILENAME. If matbench moves to support multiple benchmarks, the current implementation would overwrite the Plotly HTML. This change creates separate JSON and HTML scaled-error plots for every benchmark.
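As a rough illustration of the idea (the function, its signature, and the exact template form below are my own stand-ins, not matbench's real internals):

```python
import json
import os

# Hypothetical template form: a benchmark placeholder in the filename so each
# benchmark writes its own scaled-error artifacts instead of sharing one file.
SCALED_ERRORS_FILENAME = "scaled_errors_{}"

def write_scaled_errors(benchmark_name, fig, scaled_errors, docs_dir):
    """Write one JSON and one HTML scaled-error file per benchmark."""
    base = SCALED_ERRORS_FILENAME.format(benchmark_name)
    # Plotly's Figure.write_html produces a standalone HTML page for the plot.
    fig.write_html(os.path.join(docs_dir, base + ".html"))
    with open(os.path.join(docs_dir, base + ".json"), "w", encoding="utf-8") as f:
        json.dump(scaled_errors, f, indent=2)
```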
Changed generate_general_purpose_leaderboard_and_plot. The current implementation likewise overwrites gp_leaderboard_txt if multiple benchmarks are present. The change instead places the leaderboards one after another.
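A minimal sketch of the new behavior, assuming a hypothetical helper that renders one leaderboard per benchmark (names are illustrative, not matbench's):

```python
# Hedged sketch: generate_leaderboard_text and the benchmarks mapping are
# stand-ins for whatever builds each benchmark's general-purpose leaderboard.
def build_combined_gp_leaderboard(benchmarks, generate_leaderboard_text):
    """Concatenate one leaderboard per benchmark instead of overwriting
    a single gp_leaderboard_txt."""
    sections = []
    for benchmark_name, benchmark_data in benchmarks.items():
        sections.append(benchmark_name + "\n" + generate_leaderboard_text(benchmark_data))
    # All leaderboards end up one after the other in a single text block.
    return "\n\n".join(sections)
```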
Bug fix at line 524: the return statement was previously placed inside the for loop.
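The pattern, in simplified form (not the literal code at line 524):

```python
# A return inside the loop exits after the first item is processed.
def summarize_tasks_buggy(task_scores):
    summary = {}
    for name, score in task_scores.items():
        summary[name] = score
        return summary  # bug: only the first task is ever included

def summarize_tasks_fixed(task_scores):
    summary = {}
    for name, score in task_scores.items():
        summary[name] = score
    return summary  # fix: return once the loop has processed every task
```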
Changed all metadata.num_entries to metadata.n_samples. The default matbench metadata JSON file contains n_samples for every task, while num_entries is only added later, in a separate place. Using n_samples throughout is more consistent.
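For example (the metadata path below is illustrative, not necessarily the file's real location):

```python
import json

# Read the sample count straight from the bundled metadata file, which already
# carries n_samples for every task; num_entries is only attached elsewhere.
with open("matbench/matbench_metadata.json", encoding="utf-8") as f:
    metadata = json.load(f)

for task_name, task_meta in metadata.items():
    print(task_name, task_meta["n_samples"])  # previously task_meta["num_entries"]
```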
Added utf-8 encoding to all with open() statements. This allows rebuild_docs.py to run on a Windows machine :)
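The fix in its simplest form (the path and content here are placeholders):

```python
# Passing encoding="utf-8" explicitly avoids the locale-dependent default
# codec that makes the docs scripts fail on Windows.
leaderboard_text = "| task | best algorithm |\n"  # placeholder content
with open("docs/leaderboard.md", "w", encoding="utf-8") as f:  # path is illustrative
    f.write(leaderboard_text)
```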
Tests
I have a private version of matbench in which I've added support for another benchmark. I have built the docs locally, and both benchmarks now have separate, fully functional leaderboards.
Closed issues or PRs
None.
Label the pull request
Should be labeled docs.