Oh my, I had no idea you were working on this. This looks really good to me, but just in case, here is what I was writing up.
Everyone is welcome to contribute, and we value contributions from the community.
Below are some steps to get you started with general contributions to this repository.
Fork this repository.
Clone your fork to your local working environment.
git clone git@github.com:<your Github handle>/evaluation.git
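If you want to keep your fork in sync with the source repository, you can also register it as a second remote. A minimal sketch, where <upstream org> is a placeholder for the actual upstream organization:
cd evaluation
git remote add upstream git@github.com:<upstream org>/evaluation.git
git fetch upstream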
Make modifications and commit any changes. It's best to make your commit messages informative to help your reviewer. Below is a short list of meta-labels to get you started.
# feature (new feature)
# fix (bug fix)
# refactor (refactoring production code)
# style (formatting, missing semicolons, etc.; no code change)
# docs (changes to documentation)
# test (adding or refactoring tests; no production code change)
# chore (updating grunt tasks etc.; no production code change)
# version (version bump/new release; no production code change)
# debug (changes in debugging code/frameworks; no production code change)
# license (edits regarding licensing; no production code change)
# hack (temporary fix to make things move forward; please avoid)
For example, one possible commit message would be feature: implement lambada evaluation.
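As a concrete illustration, a typical branch-commit-push sequence might look like the following; the branch name and path are hypothetical:
git checkout -b add-lambada-task
git add evaluation/tasks/lambada
git commit -m "feature: implement lambada evaluation"
git push origin add-lambada-task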
Open a pull request in the upstream repository.
Communicate with the reviewer to address any feedback and comments until the final merge.
The style of this repository is dictated by black and isort. In your virtual environment, run
pip install -r requirements-dev.txt
In the root directory, run
make quality
to check for linting and code styling issues.
Similarly, run
make style
to automatically style and reformat your code. Since the CI in the GitHub workflows checks for style, you must run these commands before opening a PR.
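The exact recipes depend on this repository's Makefile, but as a rough, hedged sketch, the two targets typically wrap black and isort along these lines:
# make quality: check formatting and import order without modifying files
black --check .
isort --check-only .
# make style: apply the fixes in place
black .
isort .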
This repository contains a set of boilerplate templates to help you get started with adding benchmarks and datasets. The boilerplate code resides in evaluation/tasks/template. In the root directory, run
cp -r evaluation/tasks/template evaluation/tasks/{{SOME_NEW_TASK}}
to copy the boilerplate code to a new directory.
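You can sanity-check the copy with a quick listing; the new directory should contain the three template files described next:
ls evaluation/tasks/{{SOME_NEW_TASK}}
# english.json  multilingual.json  template.py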
The template directory contains three files:
template.py: This is the main Python file that will require modification. Please rename this file to the task you are implementing, e.g. lambada.py.
english.json and multilingual.json: These are configuration files for English and multilingual benchmarks, respectively. You can add any key-value pairs necessary for the evaluation run in these files. The english_only flag determines which of the two files is read, as in the sketch below.
If you have any questions or concerns, feel free to open an issue. Alternatively, feel free to discuss these topics on Slack. We value any and all forms of constructive feedback.
Just a minor FYI: Angular has split chore into build and ci, in case you guys will need the separated types.
@tianjianjiang I wasn't aware. Just added. Thanks!
I built on top of the branch @jaketae had worked on.