Closed editorialbot closed 9 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.11 s (814.8 files/s, 140424.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          28           1737           1647           6565
JSON                             7              0              0            624
Jupyter Notebook                 1              0            213            526
reStructuredText                20           1083           1064            391
Markdown                        10            111              0            318
YAML                             6             16             10            210
TOML                             2             11              0            148
CSS                              1             12              7             71
TeX                              2              1              0             61
INI                              1              5              0             56
DOS Batch                        1              8              1             26
HTML                             3              4              4             20
JavaScript                       1              1              0             16
make                             1              4              7              9
SVG                              3              0              0              6
-------------------------------------------------------------------------------
SUM:                            87           2993           2953           9047
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 364
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/j.chb.2023.107828 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@editorialbot generate pdf
Hello @vivekjoshy, I was unable to generate the paper. Could you please take a look? (cc: @vissarion)
@editorialbot generate pdf
@editorialbot commands
Hello @vivekjoshy, here are the things you can ask me to do:
# List all available commands
@editorialbot commands
# Get a list of all editors' GitHub handles
@editorialbot list editors
# Check the references of the paper for missing DOIs
@editorialbot check references
# Perform checks on the repository
@editorialbot check repository
# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist
# Set a value for branch
@editorialbot set joss-paper as branch
# Generates the pdf paper
@editorialbot generate pdf
# Generates a LaTeX preprint file
@editorialbot generate preprint
# Get a link to the complete list of reviewers
@editorialbot list reviewers
@editorialbot generate pdf
Hello @vivekjoshy, please add the Open Journals GitHub Action to the package repository, either on the main branch or any other branch. This will help us confirm that the paper is generated successfully. For your reference, here are the guidelines: JOSS Documentation. Thank you.
@Naeemkh it's already set to generate on each commit as seen here:
https://github.com/OpenDebates/openskill.py/actions/runs/5966197114 https://github.com/OpenDebates/openskill.py/blob/main/.github/workflows/draft-pdf.yml
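For anyone setting this up from scratch, the JOSS documentation describes a draft-PDF workflow along roughly these lines. Treat this as a hedged sketch: the action name and inputs should be checked against the current JOSS docs, and `paper-path` must match wherever paper.md actually lives in the repository:

```yaml
on: [push]

jobs:
  paper:
    runs-on: ubuntu-latest
    name: Paper Draft
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Build draft PDF
        uses: openjournals/openjournals-draft-action@master
        with:
          journal: joss
          # Assumed location; adjust to the repository's actual layout.
          paper-path: paper/paper.md
      - name: Upload
        uses: actions/upload-artifact@v4
        with:
          name: paper
          path: paper/paper.pdf
```

The built PDF then appears as a workflow artifact on each push, which is what editors download when the bot cannot build the paper itself.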
@vivekjoshy, great. I will download the paper from your actions.
When editorialbot is trying to clone the repository this error message is generated:
Error downloading benchmark/data/chess.csv
Error downloading benchmark/data/overwatch.jsonl
Error downloading benchmark/data/pubg.7z
batch response: This repository is over its data quota.
Account responsible for LFS bandwidth should purchase more data packs to restore access.
@xuanxu There should be pointer files for the benchmarks if you use Git LFS. Unfortunately, the repo gets cloned quite a lot, so there would be no point in topping up the data quota. I could always just delete the benchmark files or move them out to Kaggle, since they're so large. That would also solve the issue.
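For what it's worth, a common workaround when an LFS bandwidth quota is exhausted is to clone without downloading the LFS objects at all, which leaves only the small pointer files in the working tree. This is a sketch: `GIT_LFS_SKIP_SMUDGE=1` is standard Git LFS behaviour, but whether pointer files alone are useful here depends on what the bot needs from the repository:

```shell
# Clone without fetching LFS content; LFS-tracked files stay as pointer files,
# so no bandwidth quota is consumed.
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/OpenDebates/openskill.py
cd openskill.py

# An LFS pointer file is plain text beginning with the spec line
# "version https://git-lfs.github.com/spec/v1".
head -n 1 benchmark/data/chess.csv
```

This is an environment-dependent operational sequence (it requires network access and git-lfs installed), so it is shown for reference rather than as a tested recipe.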
@vivekjoshy, Please let me know where I can download the benchmark data.
@Naeemkh I've uploaded the data files for Draw and Win benchmarks here:
https://www.kaggle.com/datasets/daegontaven/overwatch-dataset https://www.kaggle.com/datasets/daegontaven/chess-dataset
It is currently not documented, but the benchmark for the Large dataset has an error in it. I've uploaded a corrected version of the benchmarks for the large dataset (pubg) here instead:
https://www.kaggle.com/datasets/daegontaven/processed-pubg
There is also an online notebook that can be viewed for it that uses the corrected dataset (please see version 11 for a correct implementation):
https://www.kaggle.com/code/daegontaven/openskill-benchmarks/notebook?scriptVersionId=142972446
Hi, @Naeemkh, @matt-graham could you please provide us with an update on the progress of this review?
Hi @vissarion, I am planning on working on this submission this week. Thanks.
@vissarion Apologies this dropped off my radar, will also aim to finish going through review checklist this week
Just to update, as I realised I didn't follow up here last week: my review checklist is now fully up to date, in the sense that I've ticked off all the items I believe the current repository and paper satisfy. The unticked items reflect things that I believe require some action before being satisfied, with details on the changes I think are needed raised in specific issues on the openskill.py repository. @vivekjoshy let me know if you need any clarification on the points I've raised.
Thank you for taking the time to suggest these changes. I am unfortunately preoccupied until the 6th of December with exams. I will get to these issues as soon as I am done with the exams. Thank you once again!
@editorialbot generate pdf
:warning: An error happened when generating the pdf.
@editorialbot set paper as branch
Done! branch is now paper
@editorialbot generate pdf
:warning: An error happened when generating the pdf.
I removed Git LFS, so it should work, but it's giving an error still.
@editorialbot generate pdf
:warning: An error happened when generating the pdf.
The bot is still failing to clone the repository before doing the checkout to the paper branch, because the Git LFS files are still present in the main branch.
Error downloading benchmark/data/chess.csv:
batch response: This repository is over its data quota.
Account responsible for LFS bandwidth should purchase more data packs to restore access.
@xuanxu are you reading this from the latest error? https://github.com/openjournals/joss-papers/actions/runs/7192563120/job/19589242658
@vissarion I just tried to git clone the repo locally and the command failed with that error message; I guess the bot got the same error.
FYI, the error says "Downloading of the repository failed. Please make sure the URL is correct. (Theoj::Error)"
Exactly, this is a different error.
Unless the download error is related to the benchmark directory size and the CI error message is just cryptic.
@xuanxu - when I do this locally, it seems to work
% git clone https://github.com/OpenDebates/openskill.py
Cloning into 'openskill.py'...
remote: Enumerating objects: 3353, done.
remote: Counting objects: 100% (701/701), done.
remote: Compressing objects: 100% (286/286), done.
remote: Total 3353 (delta 459), reused 590 (delta 386), pack-reused 2652
Receiving objects: 100% (3353/3353), 12.86 MiB | 4.86 MiB/s, done.
Resolving deltas: 100% (2197/2197), done.
What am I missing that's failing?
Cloning into 'openskill.py'...
remote: Enumerating objects: 3353, done.
remote: Counting objects: 100% (701/701), done.
remote: Compressing objects: 100% (286/286), done.
remote: Total 3353 (delta 459), reused 590 (delta 386), pack-reused 2652
Receiving objects: 100% (3353/3353), 12.86 MiB | 6.22 MiB/s, done.
Resolving deltas: 100% (2197/2197), done.
git-lfs filter-process: git-lfs: command not found
fatal: the remote end hung up unexpectedly
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'
I think this repository might use Git LFS under the hood and for some reason that's breaking the clone process? @vivekjoshy – any ideas what might be going on with your repository?
@arfon yes, it's using git-lfs in the main branch, but it's removed in the unmerged paper branch. Could it be that the main branch is being cloned and the checkout fails there? I could merge the paper branch once the reviewers get back on the latest changes; perhaps that will fix the issue. Another option is that I could just remove git-lfs from the main branch and see if that fixes it.
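If removing git-lfs from the main branch is the route taken, the cleanup might look roughly like the following. This is a hedged sketch: `git lfs migrate export` rewrites history, so a force-push is required afterwards, and the `benchmark/data` paths are assumptions based on the files mentioned earlier in this thread:

```shell
# Option A: rewrite the branch so LFS-tracked files become ordinary Git blobs.
# WARNING: this rewrites commit history; a force-push is needed afterwards.
git lfs migrate export --include="benchmark/data/*"

# Option B: drop the large files entirely and stop tracking them with LFS.
git rm -r benchmark/data
git lfs untrack "benchmark/data/*"
git commit -am "Move benchmark data out of the repository"
```

Either option would stop clones from hitting the LFS bandwidth quota, since no LFS objects would remain on the branch being cloned. Shown for reference only, since it depends on repository state and git-lfs being installed.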
@editorialbot generate pdf
:warning: An error happened when generating the pdf.
Thank you, @vivekjoshy, for the updates. I have a few minor suggestions for the paper:
Apologies for the delay in my response. I will need some more time to test the reproducibility and complete my review process.
@Naeemkh Thank you for the suggestions. I've made the changes as requested. The latest paper is downloadable from here.
Apologies for the delay in getting back to this after the holiday period. I've now finished my review checklist based on the changes @vivekjoshy has made in https://github.com/OpenDebates/openskill.py/pull/116 - once that is merged, all outstanding issues from my side will have been resolved.
@vivekjoshy, Thank you for the update. Please note that using hyperlinks as references is not considered appropriate in technical documentation. Instead, formal references should be used. Thanks.
For instance (or any other format):
@misc{busby_openskill.ex_2024,
  author    = {Busby, Philihp},
  title     = {{openskill.ex: Elixir implementation of Weng-Lin Bayesian ranking}},
  year      = {2024},
  publisher = {GitHub},
  url       = {https://github.com/philihp/openskill.ex},
  commit    = {latest},
  note      = {Last accessed on January 5, 2024}
}
@Naeemkh There is no author information for the Lua implementation since only their GitHub username is available. How would I cite them using bibtex? For now, I've left out the author field. Here is the most recent build of the paper.
@vivekjoshy, I believe it can be regarded as a webpage. Here's an example:
@misc{openskill_lua_2024,
  title = {{openskill.lua}},
  year  = {2024},
  url   = {https://github.com/bstummer/openskill.lua},
  note  = {Accessed: 2024-01-06}
}
Submitting author: @vivekjoshy (Vivek Joshy)
Repository: https://github.com/OpenDebates/openskill.py
Branch with paper.md (empty if default branch): main
Version: v5.1.0
Editor: @vissarion
Reviewers: @Naeemkh, @matt-graham
Archive: 10.5281/zenodo.8280051
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@Naeemkh & @matt-graham, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @vissarion know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @Naeemkh
📝 Checklist for @matt-graham