Closed editorialbot closed 4 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.90  T=0.09 s (1606.5 files/s, 135681.8 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                         109           1402           2189           5368
JSON                            22              0              0           1713
Markdown                         4            101              0            279
TeX                              1              7              0            140
Jupyter Notebook                 1              0            399            111
YAML                             2             15              1             81
TOML                             1              1              0             17
-------------------------------------------------------------------------------
SUM:                           140           1526           2589           7709
-------------------------------------------------------------------------------
Commit count by author:
62 Devrim
19 devrimcavusoglu
7 fcakyon
5 Ulaş "Sophylax" Sert
1 Ikko Eltociear Ashimine
1 Nish
1 Zafer Cavdar
1 cemilcengiz
1 devrim.cavusoglu
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.18653/v1/W18-5446 is OK
- 10.18653/v1/N19-1423 is OK
- 10.18653/v1/2021.emnlp-demo.21 is OK
- 10.3115/1073083.1073135 is OK
MISSING DOIs
- No DOI given, and none found for title: Relevance of Unsupervised Metrics in Task-Oriented...
- No DOI given, and none found for title: Findings of the 2020 Conference on Machine Transla...
- No DOI given, and none found for title: XLNet: Generalized Autoregressive Pretraining for ...
INVALID DOIs
- None
Paper file info:
📄 Wordcount for paper.md is 771
✅ The paper includes a `Statement of need` section
License info:
✅ License found: MIT License (Valid open source OSI approved license)
👋 @devrimcavusoglu , @evamaxfield , and @KennethEnevoldsen - This is the review thread for the paper. All of our communications will happen here from now on.
Please read the "Reviewer instructions & questions" in the first comment above.
Both reviewers have checklists at the top of this thread (in that first comment) with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention https://github.com/openjournals/joss-reviews/issues/6452 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for the review process to be completed within about 4-6 weeks but please make a start well ahead of this as JOSS reviews are by their nature iterative and any early feedback you may be able to provide to the author will be very helpful in meeting this schedule.
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@crvernon I will start checking these off tomorrow and likely finish next week unless I run into any major issues.
@devrimcavusoglu this looks like a wonderful work, which I am sure will be beneficial to many people (seems like it already is). Overall I was able to run both the examples, tests and installation without issues and found that almost all of the information (required and desired) was available.
I have a few things I was hoping you could clear up:
@devrimcavusoglu seconding that this is really great and I will likely start using it in some of my own work :)
Some comments:
For example, `pip install jury[bertscore]`. It also allows you to put version pins on the extra dependencies to make sure that the version also works with your software.

@evamaxfield to your question:
I am fine with not having a full documentation page or website but I do think it would be nice to have. At the very least can you include a link to the examples notebook somewhere in the README as "See more examples here" or something. (Although I guess this might be a question for @crvernon -- there is API docs in the code but to my knowledge I didn't see a documentation website where those API docs can be easily seen, is this an issue? I am fine with it because I am fine with digging through the source code but for others / general dev experience it may be a nice to add.)
We don't require documentation to be hosted separately; though it is a nice complement that many reviewers have started to point out. I would say this falls under the category of suggestion but not requirement as long as the codebase has the necessary documentation that is visible to the user. Thanks!
Sounds good! Then my comment resolves to simply adding a link to the examples notebook.
@KennethEnevoldsen @evamaxfield Thank you for the reviews and feedback. Apologies for the late response. I will try to respond to the items you mentioned.
Response to @KennethEnevoldsen:
- it seems like there are no guidelines for how to report issues or seek support. I would add those in (it can simply be a table, e.g. as seen here).
We did not actually document this, as we think that for most open-source software the GitHub platform is the de facto standard for raising issues, bug reports, discussions, etc. If we were using another platform for those, mentioning it would be reasonable; however, using the standard does not necessitate documenting it, as it is easily grasped by all users. Nonetheless, we did include templates for issues (e.g. bug reports, feature/metric requests).
- Have the metrics been evaluated against the implementation of e.g. Huggingface metrics (where meaningful)?
Yes, there is a comparison in the paper, particularly Fig. 2. Jury is compared to the HF metrics package from two perspectives: input size and number of metrics.
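For illustration, the two benchmarking perspectives described above can be sketched with a simple timing harness. The metric below is a toy stand-in, not Jury's actual benchmark code or API:

```python
import time

def time_metric(metric_fn, predictions, references):
    """Return the wall-clock seconds spent computing a metric once."""
    start = time.perf_counter()
    metric_fn(predictions, references)
    return time.perf_counter() - start

def exact_match(predictions, references):
    """Toy stand-in metric: fraction of exact prediction/reference matches."""
    return sum(p == r for p, r in zip(predictions, references)) / len(predictions)

# Perspective 1: fixed metric, growing input size.
timings_by_size = {}
for size in (10, 100, 1000):
    preds = ["a prediction"] * size
    refs = ["a prediction"] * size
    timings_by_size[size] = time_metric(exact_match, preds, refs)

# Perspective 2: fixed input, growing number of metrics.
metrics = [exact_match] * 3
total = sum(time_metric(m, ["a"] * 100, ["a"] * 100) for m in metrics)
```

A real comparison would substitute actual metric implementations (e.g. BLEU from each package) for `exact_match` and plot the timings, as Fig. 2 in the paper does.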
Response to @evamaxfield:
Sounds good! Then my comment resolves to simply adding a link to the examples notebook.
Actually, we do have a link to the notebook as a Google Colab link, although it is not written down in, say, a table in the README. We think this is more user friendly, as users can directly engage with the package and play around.
(Not required but I tend to think it's nice) I tried BERTScore in addition to the defaults and it asked me to install bert-score, which I completely understand not including as a default install. I generally like it when packages provide an extra install for the required extra dependency(ies). For example, `pip install jury[bertscore]`. It also allows you to put version pins on the extra dependencies to make sure that the version also works with your software.
That is a really good suggestion; we definitely plan to add this feature in upcoming versions.
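As a sketch of the suggestion, optional extras with version pins could be declared in `setup.py` roughly like this. The extra name and version pin below are illustrative, not `jury`'s actual packaging:

```python
# Hypothetical extras declaration for a setup.py; the pin range is illustrative.
extras_require = {
    "bertscore": ["bert-score>=0.3.10,<0.4"],
}

# Passed as setuptools.setup(..., extras_require=extras_require), this enables:
#   pip install jury[bertscore]
```

The pin range lets the package guarantee that whatever `bert-score` version gets installed is one it has been tested against.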
We did not actually document this, as we think that for most open-source software the GitHub platform is the de facto standard for raising issues, bug reports, discussions, etc. If we were using another platform for those, mentioning it would be reasonable; however, using the standard does not necessitate documenting it, as it is easily grasped by all users. Nonetheless, we did include templates for issues (e.g. bug reports, feature/metric requests).
I am perfectly fine with this as long as the editors (@crvernon) agree. I will leave it unchecked on the checklist.
Yes, there is a comparison in the paper, particularly Fig. 2. Jury is compared to the HF metrics package from two perspectives: input size and number of metrics.
Thanks for the clarification
I believe this clarifies all my points.
I definitely missed the open in collab button. I will check that off and the contributing section.
All good from my review :)
Per @KennethEnevoldsen 's comment...
We did not actually document this, as we think that for most open-source software the GitHub platform is the de facto standard for raising issues, bug reports, discussions, etc. If we were using another platform for those, mentioning it would be reasonable; however, using the standard does not necessitate documenting it, as it is easily grasped by all users. Nonetheless, we did include templates for issues (e.g. bug reports, feature/metric requests).
I am perfectly fine with this as long as the editors (@crvernon) agree. I will leave it unchecked on the checklist.
I do believe there needs to be a bit more here to meet the guidelines for:
Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
Even though I do agree that many users may be aware of what issues and pull requests are, our papers go out to a general audience. Thus, questions and contributions may not always come in the form of code modifications/additions from engineers, but from program managers, etc. who seek clarity on the purpose of the software in the context of their mission. These audiences often do not have familiarity with how GitHub works. So I do advise @devrimcavusoglu to be more descriptive for how to contribute. Feel free to check out previous JOSS publications for some examples. Thanks!
Even though I do agree that many users may be aware of what issues and pull requests are, our papers go out to a general audience. Thus, questions and contributions may not always come in the form of code modifications/additions from engineers, but from program managers, etc. who seek clarity on the purpose of the software in the context of their mission. These audiences often do not have familiarity with how GitHub works. So I do advise @devrimcavusoglu to be more descriptive for how to contribute. Feel free to check out previous JOSS publications for some examples. Thanks!
@crvernon I see the point, thanks for clarifying. I will make appropriate updates in the repo ASAP and will ping you then.
:wave: @devrimcavusoglu, @evamaxfield, and @KennethEnevoldsen
I'm just checking in to see how things are going. I believe @evamaxfield has finished their review and there were a few things left to do pertaining to the review from @KennethEnevoldsen. Will you please provide a short update to how things are going? Thanks!
With the addition of community guidelines I don't believe there are any additions I want to make. I believe it is a good paper/package which provides solid tools for developers.
Great, thanks @KennethEnevoldsen!
@devrimcavusoglu let me know when you have the community guidelines element addressed and are ready for me to take my final look.
@crvernon I updated the repo with the corresponding changes. You can access the community guidelines in the repo.
Thank you for the review and fruitful discussion. @KennethEnevoldsen @evamaxfield @crvernon
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
:wave: @devrimcavusoglu - Great work on this! I just need you to address a few minor things in the paper and we can move forward with next steps.
Paper:
- "...`datasets` package to promote the open-source contribution..." perhaps change this to "Our library also utilizes the `datasets` package to promote open-source contribution..."
- "...`datasets` package..." should be "...released by the `datasets` package..."

@crvernon Thank you for the comments. The corresponding changes are now applied, and I also refactored the reference scheme and corrected the references. All issues should be addressed now. If anything remains, I will refactor the artifacts further.
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.18653/v1/W18-5446 is OK
- 10.18653/v1/N19-1423 is OK
- 10.18653/v1/2021.emnlp-demo.21 is OK
- 10.3115/1073083.1073135 is OK
MISSING DOIs
- No DOI given, and none found for title: Relevance of Unsupervised Metrics in Task-Oriented...
- No DOI given, and none found for title: Findings of the 2020 Conference on Machine Transla...
- No DOI given, and none found for title: XLNet: Generalized Autoregressive Pretraining for ...
INVALID DOIs
- None
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@devrimcavusoglu - Thanks! However, there was no need to refactor the references. The previous version was how we prefer them to be used in JOSS. Please revert the citation and reference format back to the way they were.
Also, this phrase in the paper is still incorrect:
LINES 56-58
Current version:
"Our library also utilizes `datasets` package to promote the open-source contribution; when users implement metrics, the implementation can be contributed to the `datasets` package."

How the sentence should read:

"Our library also utilizes the `datasets` package to promote open-source contribution; when users implement metrics, the implementation can be contributed to the `datasets` package."
Please let me know when these changes have been made.
:wave: Just following up on the status of the above @devrimcavusoglu, thanks!
Hi @crvernon, sorry for the delayed response, the changes are now applied as requested.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.18653/v1/W18-5446 is OK
- 10.18653/v1/N19-1423 is OK
- 10.18653/v1/2021.emnlp-demo.21 is OK
- 10.3115/1073083.1073135 is OK
MISSING DOIs
- No DOI given, and none found for title: Relevance of Unsupervised Metrics in Task-Oriented...
- No DOI given, and none found for title: Findings of the 2020 Conference on Machine Transla...
- No DOI given, and none found for title: XLNet: Generalized Autoregressive Pretraining for ...
INVALID DOIs
- None
This is looking great @devrimcavusoglu! Just one more minor thing to address in the paper before we move on...
On LINE 80, "Bleu" should appear in all caps as "BLEU". You can make this happen by bracketing (e.g., {BLEU}) the letters you wish to remain capitalized in the bib file.
Let me know when this has been updated and we will move on to the next step. Thanks!
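For reference, the bracketing the editor describes looks like this in a bib entry. The fields shown are for the actual BLEU paper (whose DOI appears in the reference check above); the entry key is illustrative:

```bibtex
@inproceedings{papineni-etal-2002-bleu,
  title     = {{BLEU}: a Method for Automatic Evaluation of Machine Translation},
  author    = {Papineni, Kishore and Roukos, Salim and Ward, Todd and Zhu, Wei-Jing},
  booktitle = {Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics},
  year      = {2002},
  doi       = {10.3115/1073083.1073135}
}
```

Without the inner braces, most bibliography styles lowercase the title to "Bleu: a method..."; the braces protect the capitalization verbatim.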
On LINE 80, "Bleu" should appear in all caps as "BLEU". You can make this happen by bracketing (e.g., {BLEU}) the letters you wish to remain capitalized in the bib file.
This is intended; I didn't want to modify the BibTeX citation given by ACL (see the publication page for citations). I used the bib citation as is.
Great to know! Thanks!
👋 @devrimcavusoglu - we are almost there! Next is just setting up the archive for your new release if you have not done so already.
We want to make sure the archival has the correct metadata that JOSS requires. This includes a title that matches the paper title and a correct author list.
So here is what we have left to do:
[x] Conduct a GitHub release of the current reviewed version of the software and archive the reviewed software in Zenodo or a similar service (e.g., figshare, an institutional repository). Please ensure that the software archive uses the same license as the license you have posted on GitHub.
[x] Check the archival deposit (e.g., in Zenodo) to ensure it has the correct metadata. This includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it). You may also add the authors' ORCID.
[x] Please respond with the DOI of the archived version here
I can then move forward with accepting the submission.
@devrimcavusoglu just a reminder of the above. Thanks!
Hi @crvernon, thanks for the follow-up. Actually, `jury` already has a Zenodo archive. I can publish a new version with the latest changes, but I'm not sure whether I should do so on the `joss-paper` branch (which includes the JOSS paper) or on the `main` branch. Can you please clarify whether a new version is required (if not, you can just go with the available Zenodo page)?
@devrimcavusoglu it looks like your `joss-paper` branch is a few commits behind, and a few ahead, of `main`. Are you planning on bringing that branch into `main`? Are the only differences in the ahead/behind count due to paper edits? If so, we can use the version you already have archived. Just let me know.
@devrimcavusoglu it looks like your `joss-paper` branch is a few commits behind, and a few ahead, of `main`. Are you planning on bringing that branch into `main`? Are the only differences in the ahead/behind count due to paper edits? If so, we can use the version you already have archived. Just let me know.
Hi @crvernon, yes exactly, the only differences are in documentation and joss paper, not in the source code.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@devrimcavusoglu could you release a new version that shows your most updated code and then archive that new version on Zenodo? BTW...you can link your GitHub repository with Zenodo to do this automatically upon release. Linking it to build off of your preexisting archive is fine.
@devrimcavusoglu just checking back in on the above. Thanks!
Submitting author: @devrimcavusoglu (Devrim Çavuşoğlu)
Repository: https://github.com/obss/jury
Branch with paper.md (empty if default branch): joss-paper
Version: v2.3.1
Editor: @crvernon
Reviewers: @evamaxfield, @KennethEnevoldsen
Archive: 10.5281/zenodo.11170894
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@evamaxfield & @KennethEnevoldsen, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @crvernon know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @evamaxfield
📝 Checklist for @KennethEnevoldsen