cavalab / srbench

A living benchmark framework for symbolic regression
https://cavalab.org/srbench/
GNU General Public License v3.0

add E2ET method #90

Closed · pakamienny closed this 2 years ago

pakamienny commented 2 years ago

Competition Checklist:

I have verified that:

Refer to the competition guide if you are unsure about any steps. If you don't find an answer, ping us!

pakamienny commented 2 years ago

Hi! This is my first attempt at submitting to the SRBench competition. I tried following William's answer here https://github.com/cavalab/srbench/discussions/81#discussioncomment-2546257 on how to get the submission running. Running test_evaluate directly on our method (E2ET) works perfectly, but there is an import issue when running it through pytest (which seems to be a common problem on Stack Overflow, though I couldn't really get around it). Any idea?
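For anyone hitting the same thing: pytest imports test modules using its own rootdir logic, so code that imports fine when run directly can fail to resolve under pytest. The usual workarounds are invoking pytest as python -m pytest (which puts the current directory on sys.path) or adding a conftest.py at the repo root. A minimal sketch of the latter, with a hypothetical submission path:

```python
# conftest.py -- a minimal sketch, not the actual fix used in this PR.
# Assumes the method's code lives in a submission subfolder (the
# "submission/e2et" path below is hypothetical) that isn't on sys.path
# when pytest collects the tests.
import os
import sys

# pytest imports conftest.py before collecting tests, so path tweaks
# made here apply to every test module.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "submission", "e2et"))
```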

What about running with CI (which I am not familiar with at all)? What is the associated command?

Please let me know how I can improve my PR :) Thanks!

lacava commented 2 years ago

Hey @pakamienny, great, thanks for your submission! The install and tests are running and can be checked above, e.g.: https://github.com/cavalab/srbench/runs/5996899965?check_suite_focus=true

lacava commented 2 years ago

Looks like there's an issue with pip finding the right torch version:

Pip subprocess error:
ERROR: Could not find a version that satisfies the requirement torch==1.11.0+cu113 (from versions: 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.3.0, 1.3.1, 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0)
ERROR: No matching distribution found for torch==1.11.0+cu113
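
For context, CUDA-specific builds like 1.11.0+cu113 are hosted on the PyTorch wheel index rather than PyPI, which is why pip only sees the plain versions listed above. A sketch of the usual requirements-file fix, assuming the submission installs torch via pip:

```
# +cu113 wheels come from the PyTorch index, not PyPI; point pip at it
# explicitly, or drop the "+cu113" suffix to fall back to the default build.
--extra-index-url https://download.pytorch.org/whl/cu113
torch==1.11.0+cu113
```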
pakamienny commented 2 years ago

Hi @lacava, it looks like we finally passed the tests, but the CI somehow fails when adding us as a competitor :) Do you know whether this comes from our side? Thanks!

lacava commented 2 years ago

Hi @pakamienny, great. Yes, it looks like the error is on our end; just a bad commit ref, I think. Bear with us, we'll get it fixed!

pakamienny commented 2 years ago

Perfect, thanks. Please let me know when this is solved and whether I need to do anything else for the submission.

lacava commented 2 years ago

This is the issue: https://github.com/github/docs/issues/15319

I think I fixed it; @pakamienny could you merge with the upstream commits on Competition2022?

EDIT: I'm going to try a merge commit and see what happens :)

pakamienny commented 2 years ago

Looks like it worked @lacava. This means we are official competitors, right? Just to be sure, can we still update the model until the end of the competition via PR?

lacava commented 2 years ago

@pakamienny yes! Thanks for your submission; you are official competitors. I will be in touch if we run into any issues.

And yes, you are free to update your submission until the submission deadline. Just submit a PR that changes your submission folder. The tests are triggered by changes in that path.

Note: you can totally remove eval_kwargs and the pretrain function if you aren't using them.
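
For reference, a minimal sketch of what a submission's regressor.py might reduce to once those optional hooks are dropped. Field names other than eval_kwargs and the pretrain hook are my recollection of the competition template, so treat them as assumptions, and the stand-in estimator is purely illustrative, not the E2ET method:

```python
# regressor.py -- minimal sketch of a submission module without the
# optional eval_kwargs / pretrain hooks; the stand-in estimator is
# illustrative only, and field names are assumed from the template.
from sklearn.linear_model import LinearRegression

# the sklearn-compatible estimator that SRBench fits on each dataset
est = LinearRegression()

def model(est, X=None):
    """Return the fitted model as a string in terms of the features."""
    terms = " + ".join(f"({c})*x_{i}" for i, c in enumerate(est.coef_))
    return f"{terms} + ({est.intercept_})"
```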