openjournals / joss-reviews

Reviews for the Journal of Open Source Software
Creative Commons Zero v1.0 Universal

[REVIEW]: OpenSkill: A faster asymmetric multi-team, multiplayer rating system #5901

Closed editorialbot closed 9 months ago

editorialbot commented 1 year ago

Submitting author: @vivekjoshy (Vivek Joshy)
Repository: https://github.com/OpenDebates/openskill.py
Branch with paper.md (empty if default branch): main
Version: v5.1.0
Editor: @vissarion
Reviewers: @Naeemkh, @matt-graham
Archive: 10.5281/zenodo.8280051

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/c9c34dc845852b43e0884cf107cfcf84"><img src="https://joss.theoj.org/papers/c9c34dc845852b43e0884cf107cfcf84/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/c9c34dc845852b43e0884cf107cfcf84/status.svg)](https://joss.theoj.org/papers/c9c34dc845852b43e0884cf107cfcf84)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@Naeemkh & @matt-graham, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @vissarion know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Checklists

📝 Checklist for @Naeemkh

📝 Checklist for @matt-graham

editorialbot commented 1 year ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 1 year ago
Software report:

github.com/AlDanial/cloc v 1.88  T=0.11 s (814.8 files/s, 140424.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          28           1737           1647           6565
JSON                             7              0              0            624
Jupyter Notebook                 1              0            213            526
reStructuredText                20           1083           1064            391
Markdown                        10            111              0            318
YAML                             6             16             10            210
TOML                             2             11              0            148
CSS                              1             12              7             71
TeX                              2              1              0             61
INI                              1              5              0             56
DOS Batch                        1              8              1             26
HTML                             3              4              4             20
JavaScript                       1              1              0             16
make                             1              4              7              9
SVG                              3              0              0              6
-------------------------------------------------------------------------------
SUM:                            87           2993           2953           9047
-------------------------------------------------------------------------------

gitinspector failed to run statistical information for the repository
editorialbot commented 1 year ago

Wordcount for paper.md is 364

editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.chb.2023.107828 is OK

MISSING DOIs

- None

INVALID DOIs

- None
Naeemkh commented 1 year ago

Review checklist for @Naeemkh

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

Naeemkh commented 1 year ago

@editorialbot generate pdf

Naeemkh commented 1 year ago

Hello @vivekjoshy, I was unable to generate the paper. Could you please take a look? (cc: @vissarion)

vivekjoshy commented 1 year ago

@editorialbot generate pdf

vivekjoshy commented 1 year ago

@editorialbot commands

editorialbot commented 1 year ago

Hello @vivekjoshy, here are the things you can ask me to do:


# List all available commands
@editorialbot commands

# Get a list of all editors' GitHub handles
@editorialbot list editors

# Check the references of the paper for missing DOIs
@editorialbot check references

# Perform checks on the repository
@editorialbot check repository

# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist

# Set a value for branch
@editorialbot set joss-paper as branch

# Generates the pdf paper
@editorialbot generate pdf

# Generates a LaTeX preprint file
@editorialbot generate preprint

# Get a link to the complete list of reviewers
@editorialbot list reviewers
vivekjoshy commented 1 year ago

@editorialbot generate pdf

Naeemkh commented 1 year ago

Hello @vivekjoshy, please add the Open Journals GitHub Action to the package repository, either on the main branch or any other branch. This will help us see whether the paper is generated successfully. For your reference, here are the guidelines: JOSS Documentation. Thank you.
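For reference, the draft-PDF workflow suggested in the JOSS documentation looks roughly like the sketch below (based on the `openjournals/openjournals-draft-action` described in the JOSS docs; the `paper-path` value and action versions should be adjusted to the repository in question):

```yaml
name: Draft PDF
on: [push]

jobs:
  paper:
    runs-on: ubuntu-latest
    name: Paper Draft
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Build draft PDF
        uses: openjournals/openjournals-draft-action@master
        with:
          journal: joss
          # Adjust to the location of paper.md in your repository.
          paper-path: paper.md
      - name: Upload
        uses: actions/upload-artifact@v4
        with:
          name: paper
          path: paper/paper.pdf
```

Committing this workflow makes each push build the draft PDF as a downloadable artifact, which is what the reviewers need here.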

vivekjoshy commented 1 year ago

@Naeemkh it's already set to generate on each commit as seen here:

https://github.com/OpenDebates/openskill.py/actions/runs/5966197114 https://github.com/OpenDebates/openskill.py/blob/main/.github/workflows/draft-pdf.yml

Naeemkh commented 1 year ago

@vivekjoshy, great. I will download the paper from your actions.

matt-graham commented 1 year ago

Review checklist for @matt-graham

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

xuanxu commented 1 year ago

When editorialbot is trying to clone the repository this error message is generated:

Error downloading benchmark/data/chess.csv 
Error downloading benchmark/data/overwatch.jsonl
Error downloading benchmark/data/pubg.7z

batch response: This repository is over its data quota. 
Account responsible for LFS bandwidth should purchase more data packs to restore access.
vivekjoshy commented 1 year ago

@xuanxu There should be pointer files for the benchmarks if you use Git LFS. Unfortunately, the repo gets cloned quite a lot and there would be no point in topping up the data quota. I could always just delete the benchmark files or move it out to Kaggle since they're so large. That would also solve the issue.

Naeemkh commented 1 year ago

> @xuanxu There should be pointer files for the benchmarks if you use Git LFS. Unfortunately, the repo gets cloned quite a lot and there would be no point in topping up the data quota. I could always just delete the benchmark files or move it out to Kaggle since they're so large. That would also solve the issue.

@vivekjoshy, Please let me know where I can download the benchmark data.

vivekjoshy commented 1 year ago

@Naeemkh I've uploaded the data files for Draw and Win benchmarks here:

https://www.kaggle.com/datasets/daegontaven/overwatch-dataset https://www.kaggle.com/datasets/daegontaven/chess-dataset

It is currently not documented, but the benchmark for the Large dataset has an error in it. I've uploaded a corrected version of the large (pubg) benchmark here instead:

https://www.kaggle.com/datasets/daegontaven/processed-pubg

There is also an online notebook that uses the corrected dataset (please see version 11 for the correct implementation):

https://www.kaggle.com/code/daegontaven/openskill-benchmarks/notebook?scriptVersionId=142972446

vissarion commented 11 months ago

Hi, @Naeemkh, @matt-graham could you please provide us with an update on the progress of this review?

Naeemkh commented 11 months ago

Hi @vissarion, I am planning on working on this submission this week. Thanks.

matt-graham commented 11 months ago

@vissarion Apologies, this dropped off my radar; I will also aim to finish going through the review checklist this week.

matt-graham commented 10 months ago

Just to update, as I realised I didn't follow up here last week: my review checklist is now fully up to date, in the sense that I've ticked off all items that I believe the current repository and paper satisfy. The unticked items reflect things that I believe require some action before being satisfied, with details on the changes I think are needed raised in specific issues on the openskill.py repository. @vivekjoshy, let me know if you need any clarification on the points I've raised.

vivekjoshy commented 10 months ago

> Just to update as I realised I didn't follow up here last week: my review checklist is now fully up to date in the sense that I've ticked off all items that I believe the current repository and paper satisfy, with the unticked items reflecting things I believe require some action before being satisfied, with details on the changes I think are needed raised in specific issues on the openskill.py repository. @vivekjoshy let me know if you need any clarification on the points I've raised.

Thank you for taking the time to suggest these changes. I am unfortunately pre-occupied till the 6th of December with exams. I will get to these issues as soon as I am done with the exams. Thank you once again!

vivekjoshy commented 10 months ago

@editorialbot generate pdf

editorialbot commented 10 months ago

:warning: An error happened when generating the pdf.

vivekjoshy commented 10 months ago

@editorialbot set paper as branch

editorialbot commented 10 months ago

Done! branch is now paper

vivekjoshy commented 10 months ago

@editorialbot generate pdf

editorialbot commented 10 months ago

:warning: An error happened when generating the pdf.

vivekjoshy commented 10 months ago

I removed Git LFS, so it should work, but it's still giving an error.

vissarion commented 10 months ago

@editorialbot generate pdf

editorialbot commented 10 months ago

:warning: An error happened when generating the pdf.

xuanxu commented 10 months ago

The bot is still failing to clone the repository before doing the checkout to the paper branch because the Git LFS files are still present in the main branch.

Error downloading benchmark/data/chess.csv: 
batch response: This repository is over its data quota. 
Account responsible for LFS bandwidth should purchase more data packs to restore access.
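As a possible workaround for the quota error, Git LFS supports skipping the large-file downloads at clone time so that only the lightweight pointer files are fetched; this is a standard Git LFS environment variable, not something specific to this repository:

```shell
# Clone without downloading LFS-tracked content; only pointer files are
# checked out, so the clone succeeds even when the repository's LFS
# bandwidth quota is exhausted.
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/OpenDebates/openskill.py
```

Individual files can still be fetched later with `git lfs pull` once the quota is restored.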
vissarion commented 10 months ago

@xuanxu are you reading this from the latest error? https://github.com/openjournals/joss-papers/actions/runs/7192563120/job/19589242658

xuanxu commented 10 months ago

@vissarion I just tried to git clone the repo locally and the command failed with that error message, I guess the bot got the same error.

danielskatz commented 10 months ago

FYI, the error says "Downloading of the repository failed. Please make sure the URL is correct. (Theoj::Error)"

vissarion commented 10 months ago

Exactly, this is a different error.

vissarion commented 10 months ago

Unless the download error is related to the benchmark directory size and the CI error message is just cryptic.

danielskatz commented 10 months ago

@xuanxu - when I do this locally, it seems to work

% git clone  https://github.com/OpenDebates/openskill.py
Cloning into 'openskill.py'...
remote: Enumerating objects: 3353, done.
remote: Counting objects: 100% (701/701), done.
remote: Compressing objects: 100% (286/286), done.
remote: Total 3353 (delta 459), reused 590 (delta 386), pack-reused 2652
Receiving objects: 100% (3353/3353), 12.86 MiB | 4.86 MiB/s, done.
Resolving deltas: 100% (2197/2197), done.

What am I missing that's failing?

arfon commented 10 months ago
Cloning into 'openskill.py'...
remote: Enumerating objects: 3353, done.
remote: Counting objects: 100% (701/701), done.
remote: Compressing objects: 100% (286/286), done.
remote: Total 3353 (delta 459), reused 590 (delta 386), pack-reused 2652
Receiving objects: 100% (3353/3353), 12.86 MiB | 6.22 MiB/s, done.
Resolving deltas: 100% (2197/2197), done.
git-lfs filter-process: git-lfs: command not found
fatal: the remote end hung up unexpectedly
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'

I think this repository might use Git LFS under the hood and for some reason that's breaking the clone process? @vivekjoshy – any ideas what might be going on with your repository?

vivekjoshy commented 10 months ago

@arfon yes, it's using git-lfs in the main branch, but it's removed in the unmerged paper branch. It could be that the main branch is being cloned and the checkout fails there. I could merge the paper branch once the reviewers sign off on the latest changes; perhaps that will fix the issue. Alternatively, I could just remove git-lfs from the main branch and see if that fixes it.

Naeemkh commented 9 months ago

@editorialbot generate pdf

editorialbot commented 9 months ago

:warning: An error happened when generating the pdf.

Naeemkh commented 9 months ago

Thank you, @vivekjoshy, for the updates. I have a few minor suggestions for the paper:

  1. I suggest not including any comments about Microsoft and TrueSkills licenses, as this is outside the scope of JOSS. It's sufficient to mention that your package is open source and publicly available.
  2. In the 'Statement of Need' section, you mention five distinct models but only discuss two. Please ensure that all five models are discussed.
  3. The statement, 'The OpenSkill library furnishes a versatile suite of models and algorithms designed to support a broad spectrum of applications,' is too vague for technical writing. Since you're aware of the exact number of models and algorithms, please include these specific details.
  4. In the 'Related Package' section, add GitHub URLs as references for Elixir, Kotlin, and Lua.

Apologies for the delay in my response. I will need some more time to test the reproducibility and complete my review process.
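For readers unfamiliar with the model family the paper describes, the core idea can be sketched in a few lines. The following is a simplified, self-contained two-player Bradley-Terry-style update in the spirit of Weng & Lin (2011), the work OpenSkill's models are based on. It is an illustration only, not the library's implementation; the constants (mu = 25, sigma = 25/3, beta = sigma/2) are conventional defaults assumed for the example, and the full model also updates sigma.

```python
import math

# Illustrative prior: every new player starts at mu = 25, sigma = 25/3.
MU, SIGMA = 25.0, 25.0 / 3.0
BETA = SIGMA / 2.0  # performance noise scale (assumed convention)

def update(winner, loser):
    """Return updated (mu, sigma) pairs after `winner` beats `loser`.

    Only the means are updated in this sketch; the full Weng-Lin model
    also shrinks each player's sigma after every match.
    """
    (mu_w, s_w), (mu_l, s_l) = winner, loser
    c = math.sqrt(2 * BETA**2 + s_w**2 + s_l**2)
    # Bradley-Terry win probability for the eventual winner.
    p_w = math.exp(mu_w / c) / (math.exp(mu_w / c) + math.exp(mu_l / c))
    p_l = 1.0 - p_w
    # Each mean moves by (uncertainty / c) * (actual - expected outcome).
    mu_w_new = mu_w + (s_w**2 / c) * (1.0 - p_w)  # winner's outcome is 1
    mu_l_new = mu_l + (s_l**2 / c) * (0.0 - p_l)  # loser's outcome is 0
    return (mu_w_new, s_w), (mu_l_new, s_l)

# Two previously unrated players: the winner's mean rises, the loser's falls.
w, l = update((MU, SIGMA), (MU, SIGMA))
```

Because the update is a closed-form expression rather than an iterative approximation, it is cheap to apply at scale, which is the efficiency claim the paper's title refers to.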

vivekjoshy commented 9 months ago

@Naeemkh Thank you for the suggestions. I've made the changes as requested. The latest paper is downloadable from here.

matt-graham commented 9 months ago

Apologies for the delay in getting back to this after the holiday period. I've now finished my review checklist based on the changes @vivekjoshy has made in https://github.com/OpenDebates/openskill.py/pull/116 - when that is merged, all outstanding issues from my side will have been resolved.

Naeemkh commented 9 months ago

@vivekjoshy, Thank you for the update. Please note that bare hyperlinks are not considered appropriate as references in technical documentation; formal references should be used instead. Thanks.

For instance (or any other format):

@misc{busby_openskill.ex_2024,
  author = {Busby, Philihp},
  title = {{openskill.ex: Elixir implementation of Weng-Lin Bayesian ranking}},
  year = {2024},
  publisher = {GitHub},
  url = {https://github.com/philihp/openskill.ex},
  commit = {latest},
  note = {Last accessed on January 5, 2024}
}
vivekjoshy commented 9 months ago

@Naeemkh There is no author information for the Lua implementation since only their GitHub username is available. How would I cite them using bibtex? For now, I've left out the author field. Here is the most recent build of the paper.

Naeemkh commented 9 months ago

@vivekjoshy, I believe it can be regarded as a webpage. Here's an example:

@misc{openskill_lua_2024,
  title = {{openskill.lua}},
  year = {2024},
  url = {https://github.com/bstummer/openskill.lua},
  note = {Accessed: 2024-01-06}
}