cavalab / srbench

A living benchmark framework for symbolic regression
https://cavalab.org/srbench/
GNU General Public License v3.0

Missing model_size #171

Open matigekunstintelligentie opened 4 months ago

matigekunstintelligentie commented 4 months ago

When I run the following command in the `experiment` folder: `python evaluate_model.py ../../pmlb/datasets/503_wind/503_wind.tsv.gz -ml myalg -results_path ../results_blackbox/503_wind/ -seed 16850 -target_noise 0.0 -feature_noise 0.0`, I expect a .json with a field called `model_size`; instead I get a `simplicity` field.
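A quick way to confirm which fields a run actually produced is to inspect the result JSON directly. The path below is illustrative only; the actual filename is built from the dataset, algorithm, and seed:

```python
import json

# Hypothetical output path; substitute the JSON that evaluate_model.py actually wrote.
with open("../results_blackbox/503_wind/503_wind_myalg_16850.json") as f:
    results = json.load(f)

print(sorted(results.keys()))
print("model_size" in results, "simplicity" in results)
```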

After collating these results, they are incompatible with the `blackbox_results.ipynb` script because all values in the `model_size` field are NaN.
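For illustration only (the actual collation script may behave differently), a sketch of how a key that is missing from every record turns into an all-NaN column once the notebook selects `model_size` by name:

```python
import pandas as pd

# Collated records contain 'simplicity' but not 'model_size' (values illustrative).
runs = pd.DataFrame([{"dataset": "503_wind", "algorithm": "myalg", "simplicity": 0.87}])

# Selecting the column the notebook expects fills it with NaN for every row.
collated = runs.reindex(columns=["dataset", "algorithm", "model_size"])
print(collated["model_size"].isna().all())  # True
```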

matigekunstintelligentie commented 4 months ago

It can be fixed by adding the following to experiments/evaluate_model.py:

```python
from metrics.evaluation import simplicity, complexity  # at line 23

results['model_size'] = complexity(results['symbolic_model'], feature_names)  # at line 223
```

And the following to experiment/metrics/evaluation.py (around line 68):

```python
def complexity(pred_model, feature_names):
    local_dict = {f: sp.Symbol(f) for f in feature_names}
    sp_model = get_symbolic_model(pred_model, local_dict)

    # compute num. components
    num_components = 0
    for _ in sp.preorder_traversal(sp_model):
        num_components += 1
    return num_components
```
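For context, `sp.preorder_traversal` visits every node of the sympy expression tree, so operators, symbols, and constants all count toward the size. A minimal standalone sketch of the same counting logic (plain sympy, without srbench's `get_symbolic_model` helper):

```python
import sympy as sp

x, y = sp.symbols("x y")
expr = x**2 + sp.sin(y)

# Count every node in the expression tree: Add, Pow, x, 2, sin, y -> 6.
num_components = sum(1 for _ in sp.preorder_traversal(expr))
print(num_components)  # 6
```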

lacava commented 4 months ago

what version are you running?

matigekunstintelligentie commented 4 months ago

@lacava I believe I pulled the latest version. The blackbox_results.ipynb hasn't changed, and I can't find 'model_size' anywhere when I search for it here on GitHub.

lacava commented 4 months ago

got it. your best bet if you want to reproduce the paper results is to use the v2.0 release. the postprocessing notebooks have not been updated since we made changes to evaluation based on the competition. in v2.0, evaluate_model.py stores model_size here