cavalab / srbench

A living benchmark framework for symbolic regression
https://cavalab.org/srbench/
GNU General Public License v3.0

PS-Tree #74

Closed hengzhe-zhang closed 2 years ago

hengzhe-zhang commented 2 years ago

Hello! I spent a day updating my algorithm package so that it now outputs a sympy-compatible model. The change was made in the upstream open-source package, so no modification is needed here. However, since the new requirements ask pull requests to target the "dev" branch, I am opening this new pull request and hope this version of the package satisfies the SRBench requirements.
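For context, a minimal sketch of the interface SRBench expects from a submission's regressor.py, assuming a hypothetical import path and constructor for PS-Tree (the real package layout may differ):

```python
# Sketch of an SRBench-style regressor.py for PS-Tree.
# The import path and constructor arguments are assumptions made for
# illustration; see the PS-Tree package for the actual interface.
import sympy
from pstree.cluster_gp_sklearn import PSTreeRegressor  # assumed path

# The estimator SRBench fits; default hyperparameters are used here.
est = PSTreeRegressor()

def model(est, X=None):
    """Return the fitted model as a string that sympy can parse."""
    return str(est.model())  # assumes the estimator exposes model()

# Quick sanity check that the output is sympy-compatible:
# expr = sympy.parse_expr(model(est))
```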

lacava commented 2 years ago

hi @Hengzhe-Zhang, sounds good, thanks for rebasing. I'm trying to figure out why the CI tests have not been triggered by this pull request.

hengzhe-zhang commented 2 years ago

Hello @lacava! I have fixed a bug in printing the symbolic model, and the CI works fine now.

hengzhe-zhang commented 2 years ago

@lacava Thanks! If I have more than one chance to submit my algorithm, I don't think I need to tune PS-Tree this time. In my experience, PS-Tree's default parameters work well.

lacava commented 2 years ago

> @lacava Thanks! If I have more than one chance to submit my algorithm, I don't think I need to tune PS-Tree this time. In my experience, PS-Tree's default parameters work well.

I'm not planning to run PS-Tree multiple times after conducting the experiment; it takes very long. So if you want to do any tuning, you should add it now. If not, I will go with the default parameters.

hengzhe-zhang commented 2 years ago

@lacava Thanks! I understand your concern. In my experience, PS-Tree can match Operon's speed when all experiments are run with the default hyperparameters. I used only a single 96-core Huawei server for my experiments and was able to complete all of them in one night. If the runtime is still a concern, I would be happy to provide a parameter grid, but it will take some time.

lacava commented 2 years ago

hi @hengzhe-zhang I think that for an apples-to-apples time comparison we need six tunable settings (see issue #24 for discussion). We could also consider reporting per-instance training time within CV, but that is not currently measured.
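For reference, SRBench expresses tuning as a hyper_params grid in regressor.py. A minimal sketch of a six-combination grid follows; the parameter names are placeholders, not PS-Tree's actual options:

```python
# Hypothetical six-combination grid in SRBench's hyper_params format
# (a scikit-learn style parameter grid). Names and values below are
# illustrative placeholders, not PS-Tree's real tuning knobs.
hyper_params = [
    {
        'height_limit': (4, 6, 8),  # 3 values
        'n_gen': (100, 200),        # x 2 values = 6 combinations
    },
]
```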

lacava commented 2 years ago

@hengzhe-zhang I'm going to merge this and open an issue about the parameters.