cavalab / srbench

A living benchmark framework for symbolic regression
https://cavalab.org/srbench/
GNU General Public License v3.0

PS-Tree Competition 2022 #97

Closed hengzhe-zhang closed 2 years ago

hengzhe-zhang commented 2 years ago

Hello, this is the code for participating in the GECCO competition. Thanks for providing such an opportunity to compare different symbolic regression methods.

lacava commented 2 years ago

Hi @hengzhe-zhang, please put your code in the submission folder, not the competitor folder.

hengzhe-zhang commented 2 years ago

@lacava Hi! I have moved all the files to the submission folder. Sorry for the late response; I have been busy these days.

hengzhe-zhang commented 2 years ago

@lacava Hi! It seems the CI failure was caused by an error in Taylor GP. Should I do anything to resolve it?

lacava commented 2 years ago

@hengzhe-zhang stand by, I'm making some changes.

lacava commented 2 years ago

hi @hengzhe-zhang, I've made some updates to the competition branch to fix issues with conflicting submissions.

to fix this PR, please do the following:

  1. merge changes to your fork from cavalab:Competition2022 (here's a guide: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork); see the command sketch after this list
  2. push the changes to hengzhe-zhang:Competition2022 and we should be good to go.
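
For reference, a minimal command-line sketch of that sync-and-push workflow. It assumes your fork is the origin remote and that no upstream remote pointing at cavalab/srbench has been configured yet:

# add the main repository as a remote (skip if already configured)
git remote add upstream https://github.com/cavalab/srbench.git
# fetch the updated competition branch and merge it into your fork's copy
git fetch upstream
git checkout Competition2022
git merge upstream/Competition2022
# push the merged branch back to your fork to update the PR
git push origin Competition2022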

methods that have passed the checks are now moved to the official_competitors folder after submission. only the submission/ folder should be changed in your PR.

sorry for the inconvenience, and thanks again for your submission.

lacava commented 2 years ago

Hi @hengzhe-zhang, I just pushed a fix to evaluate_model for the case where test_params is undefined. You can either rebase your PR onto upstream Competition2022, or add this to regressor.py:

# define eval_kwargs so evaluate_model finds test_params;
# an empty dict means no test-time parameter overrides
eval_kwargs = dict(
    test_params={}
)
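
For context, here is a minimal sketch of how that snippet can sit in a submission's regressor.py. This is an assumed layout, not the actual PS-Tree submission: the scikit-learn estimator and the model string are stand-ins for illustration only.

from sklearn.linear_model import LinearRegression

# stand-in estimator; a real submission defines its own method here
est = LinearRegression()

def model(est):
    # return a string form of the fitted model for evaluation
    # (placeholder string; a real submission derives this from est)
    return "hypothetical_model_string"

# extra keyword arguments consumed during evaluation;
# an empty test_params avoids the undefined-variable failure
eval_kwargs = dict(
    test_params={}
)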
hengzhe-zhang commented 2 years ago

@lacava Thanks for your help! Everything seems to be working well now.