[x] Title of this PR is meaningful, e.g. "method X for comp"
[x] A folder has been added to submission/ with a meaningful name corresponding to your method name.
The added folder includes these elements:
[x] metadata.yml (required): A file describing your submission, following the descriptions in example/metadata.yml.
[x] regressor.py (required): a Python file that defines your method, named appropriately. See submission/feat-example/regressor.py for complete documentation. It contains:
[x] est: a sklearn-compatible Regressor object.
[x] model(est, X=None): a function that returns a sympy-compatible string specifying the final model. It can optionally take the training data as an input argument. See guidance below.
[ ] eval_kwargs (optional): a dictionary that can specify method-specific arguments to evaluate_model.py.
[ ] LICENSE (optional): a license file.
[x] environment.yml (optional): a conda environment file that specifies dependencies for your submission.
[x] install.sh (optional): a bash script that installs your method.
[ ] additional files (optional): you may include a folder containing the code for your method in the submission.
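As a reference for the `regressor.py` items above, here is a minimal sketch of what such a file might contain. It uses scikit-learn's `LinearRegression` as a stand-in estimator, and the feature naming (`x_0`, `x_1`, ...) and the `eval_kwargs` values are illustrative assumptions, not requirements of the competition harness:

```python
# Hypothetical minimal regressor.py (stand-in method; adapt to your own).
from sklearn.linear_model import LinearRegression

# est: a sklearn-compatible Regressor object.
est = LinearRegression()


def model(est, X=None):
    """Return a sympy-compatible string for the fitted model.

    Assumes features are named x_0, x_1, ...; if your method tracks real
    column names from X, substitute them here.
    """
    terms = [f"({c}*x_{i})" for i, c in enumerate(est.coef_)]
    terms.append(str(est.intercept_))
    return " + ".join(terms)


# eval_kwargs (optional): method-specific arguments passed through to
# evaluate_model.py. The key/value below are placeholders.
eval_kwargs = {}
```

After fitting `est` on training data, `model(est)` should produce a string like `(2.0*x_0) + 1.0` that sympy can parse.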
I have verified that:
[x] install scripts do not require sudo permissions.
[x] if pulled remotely, the source code is pinned to a fixed version (i.e., rerunning install.sh shouldn't pull a different version of the code when run multiple times.)
Refer to the competition guide if you are unsure about any steps.
If you don't find an answer, ping us!