Closed pakamienny closed 2 years ago
Hi! This is my first try at submitting to the SRBench competition. I followed William's answer here https://github.com/cavalab/srbench/discussions/81#discussioncomment-2546257 on how to get the submission running. Running test_evaluate directly on our method (E2ET) works perfectly, but there is an import issue when running under pytest (which seems to be a common problem on Stack Overflow, though I could not really get around it). Any idea?
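For reference, a common workaround for pytest import errors (not necessarily the cause here, and the srbench test invocation may differ): running pytest as a module with `python -m pytest` puts the current directory on `sys.path`, so local packages resolve. A self-contained reproduction with a hypothetical package `mypkg`:

```shell
set -e
# Demo: a local package imported by a test, run via `python -m pytest`.
# `mypkg` and `test_demo.py` are stand-ins for illustration only.
tmp=$(mktemp -d); cd "$tmp"
mkdir mypkg
touch mypkg/__init__.py
echo "VALUE = 7" > mypkg/helper.py
printf 'from mypkg.helper import VALUE\n\ndef test_value():\n    assert VALUE == 7\n' > test_demo.py
# `python -m pytest` prepends the current directory to sys.path,
# so `from mypkg.helper import VALUE` works:
python -m pytest -q test_demo.py
```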
What about running with CI (which I am not familiar with at all)? What is the associated command?
Please let me know how I can improve my PR :) thanks!
hey @pakamienny , great, thanks for your submission! The install and tests are running and can be checked above, e.g.: https://github.com/cavalab/srbench/runs/5996899965?check_suite_focus=true
Looks like there's an issue with pip finding the right torch version:
```
Pip subprocess error:
ERROR: Could not find a version that satisfies the requirement torch==1.11.0+cu113 (from versions: 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.3.0, 1.3.1, 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0)
ERROR: No matching distribution found for torch==1.11.0+cu113
```
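For context (an assumption about the likely fix, not a confirmed diagnosis): CUDA-suffixed builds such as `1.11.0+cu113` are hosted on PyTorch's own wheel index rather than PyPI, so pip cannot resolve them without an extra index URL. In an `environment.yml`, the two usual fixes look roughly like:

```yaml
# Hypothetical environment.yml pip section. Either pin the plain PyPI
# build of torch ...
dependencies:
  - pip
  - pip:
      - torch==1.11.0
# ... or keep the CUDA build and point pip at PyTorch's wheel index:
#     - --extra-index-url https://download.pytorch.org/whl/cu113
#     - torch==1.11.0+cu113
```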
Hi @lacava, it looks like we finally passed the tests, but the CI somehow fails when adding us as a competitor :) Do you know whether this comes from our side? Thanks!
hi @pakamienny , great. Yes, it looks like the error is on our end, just a bad commit ref I think. Bear with us, we'll get it fixed!
Perfect, thanks. Please let me know when this is solved and whether I have to do anything for the submission.
this is the issue: https://github.com/github/docs/issues/15319
I think I fixed it; @pakamienny could you merge with the upstream commits on Competition2022?
EDIT: i'm going to try a merge commit and see what happens :)
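The merge being asked for is a standard fork sync. A self-contained sketch below simulates it with local repositories, since the real workflow needs network access; in practice `upstream` would be https://github.com/cavalab/srbench.git and the branch is Competition2022:

```shell
set -e
# Local simulation of syncing a fork with upstream commits.
tmp=$(mktemp -d); cd "$tmp"

git init -q -b Competition2022 upstream_repo
git -C upstream_repo -c user.email=u@e -c user.name=u \
    commit -q --allow-empty -m "initial"

git clone -q upstream_repo fork            # stands in for your fork

# Upstream gains a new commit (e.g. the CI fix)...
git -C upstream_repo -c user.email=u@e -c user.name=u \
    commit -q --allow-empty -m "upstream CI fix"

# ...which the fork pulls in with fetch + merge:
cd fork
git remote add upstream "$tmp/upstream_repo"   # real: the GitHub URL
git fetch -q upstream
git merge -q upstream/Competition2022
git log --oneline | grep "upstream CI fix"
```

After this, pushing the branch to your fork updates the open PR.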
Looks like it worked @lacava. This means we are official competitors, right? Just to be sure: we can still update the model until the end of the competition, via PR?
@pakamienny yes! thanks for your submission, you are official competitors. I will be in touch if we run into any issues.
And yes, you are free to update your submission until the submission deadline. Just submit a PR that changes your submission folder. The tests are triggered by changes in that path.
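As an aside, path-scoped triggers like this are standard GitHub Actions behavior. A hypothetical workflow snippet (not necessarily srbench's actual config) that runs tests only when a submission changes:

```yaml
# Hypothetical GitHub Actions trigger: run the submission tests only
# when files under submission/ change in a pull request.
on:
  pull_request:
    paths:
      - "submission/**"
```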
Note: you can totally remove `eval_kwargs` and the `pretrain` function if you aren't using them.
Competition Checklist:

A folder has been added to `submission/` with a meaningful name corresponding to your method name. The added folder includes these elements:

- [x] `metadata.yml` (required): A file describing your submission, following the descriptions in `example/metadata.yml`.
- [x] `regressor.py` (required): a Python file that defines your method, named appropriately. See `submission/feat-example/regressor.py` for complete documentation. It contains:
  - `est`: a sklearn-compatible `Regressor` object.
  - `model(est, X=None)`: a function that returns a sympy-compatible string specifying the final model. It can optionally take the training data as an input argument. See guidance below.
  - `eval_kwargs` (optional): a dictionary that can specify method-specific arguments to `evaluate_model.py`.
- [ ] `LICENSE` (optional): A license file.
- [ ] `environment.yml` (optional): a conda environment file that specifies dependencies for your submission.
- [ ] `install.sh` (optional): a bash script that installs your method.
- [ ] additional files (optional): you may include a folder containing the code for your method in the submission.

I have verified that:

- [ ] `install.sh` shouldn't pull a different version of the code when run multiple times.

Refer to the competition guide if you are unsure about any steps. If you don't find an answer, ping us!
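Putting the required pieces together, a minimal `regressor.py` could look like the following sketch. The `LinearRegression` estimator is a placeholder for illustration only, not the E2ET method, and the model string simply follows the sympy-compatible convention described above:

```python
# Hypothetical minimal regressor.py; LinearRegression stands in for
# the actual submitted method.
from sklearn.linear_model import LinearRegression

# est: a sklearn-compatible Regressor object
est = LinearRegression()

def model(est, X=None):
    """Return a sympy-compatible string for the fitted model."""
    terms = [f"{coef}*x{i}" for i, coef in enumerate(est.coef_)]
    terms.append(str(est.intercept_))
    return " + ".join(terms)

# eval_kwargs (optional): method-specific arguments for evaluate_model.py.
# Per the note above, it can be removed entirely if unused.
eval_kwargs = {}
```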