Closed: editorialbot closed this issue 3 days ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.25080/majora-212e5952-018 is OK
- 10.2307/2413638 is OK
- 10.25080/gerudo-f2bc6f59-00f is OK
- 10.1093/molbev/msl072 is OK
- 10.1016/0025-5564(81)90043-2 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Software report:
github.com/AlDanial/cloc v 1.90 T=0.09 s (579.0 files/s, 192692.8 lines/s)
--------------------------------------------------------------------------------
Language files blank comment code
--------------------------------------------------------------------------------
Bourne Again Shell 2 308 722 5096
XML 6 0 0 4149
Python 9 491 949 1186
CSV 12 677 0 811
Perl 2 290 104 670
Ruby 1 64 66 375
Markdown 3 109 0 275
YAML 10 15 55 205
TOML 1 8 0 66
TeX 1 0 0 51
DOS Batch 2 7 6 28
PowerShell 1 4 161 11
make 1 3 0 10
--------------------------------------------------------------------------------
SUM: 51 1976 2063 12933
--------------------------------------------------------------------------------
Commit count by author:
209 Nadia Tahiri, PhD
169 my-linh-luu
56 Georges Marceau
54 Nadia Tahiri
37 slepaget
30 cetmus
25 db036
25 geomarceau
18 Elie Maalouf
14 francis.lewis07@gmail.com
11 Simon Lepage-Trudeau
9 simlal
8 Alex
4 Wanlin Li
2 Marc-Antoine Bélisle
2 Mus
2 Nadia Tahiri, Ph. D
2 TahiriNadia
2 jsDesm
1 KarlP910
Paper file info:
📄 Wordcount for paper.md is 1560
✅ The paper includes a Statement of need section
License info:
✅ License found: MIT License (Valid open source OSI approved license)
@editorialbot check references
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.25080/majora-212e5952-018 is OK
- 10.2307/2413638 is OK
- 10.25080/gerudo-f2bc6f59-00f is OK
- 10.1093/molbev/msl072 is OK
- 10.1016/0025-5564(81)90043-2 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@TahiriNadia - we've now started the "review" thread on GitHub, i.e., here. Please use this comment thread to ask questions and, once initial reviews are available, to respond to issues that the reviewers raise.
@mmore500 - how is the review going? Do you have any questions?
Thanks for checking in. No issues so far! I have some time set aside shortly to sit down and complete my review.
This sounds great, @mmore500 ! Please let me know if any questions arise. Thank you again!
@annazhukova - how is the review going? do you have any questions yet?
Some comments on the manuscript. Planning to follow up on the software content shortly.
There is a grammar issue in “between a genetic of species and its habitat during the reconstruction”
Address in more specific terms what scientific question(s) can be addressed through these analyses.
What windows are you referring to?
Citations to the software would be appropriate.
a specific application example or case study would greatly benefit the clarity of the manuscript
I have filled in my checklist, and here are a few comments:
- [ ] Contribution and authorship: Has the submitting author (@TahiriNadia) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
I have not ticked this item: looking at the contributors page, I saw that the user my-linh-luu seems to have contributed substantially to the software but does not appear on the authors' list
- [ ] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines
From what I understood from the guidelines, the software needs to be already quite established (cited, used), which does not yet seem to be the case here.
Some factors that may be considered by editors and reviewers when judging effort include: Age of software (is this a well-established software project) / length of commit history
I saw that the first tag was in June 2022, so one would expect a fairly well-established software with some usage by now
Number of commits:
I have counted about 600 on the contributors page
Number of authors:
3 (on the paper, more on github)
Total lines of code (LOC). Submissions under 1000 LOC will usually be flagged, those under 300 LOC will be desk rejected.
I have assessed the LOC with the `wc -l *.py` command on the aphylogeo folder: 2152
Whether the software has already been cited in academic papers.
According to google-scholar the only citation is a self-citation
Whether the software is sufficiently useful that it is likely to be cited by your peer group.
I think that a clear example of an analysis pipeline with the software would highly increase the chances of future citations (as people would know how to use the software for their data).
Functionality
- [ ] Installation: Does installation proceed as outlined in the documentation?
See this issue
- [ ] Functionality: Have the functional claims of the software been confirmed?
I haven't managed to install it (see above)
- [ ] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)
I haven't managed to install it (see above)
Documentation
- [ ] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
See issue
- [ ] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
See issue
- [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
There are guidelines, but they could be better illustrated: see issue
Software paper
- [ ] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
From what I understood at first, the goal of the software is to analyse the correlation between climate and species evolution. However, reading further, and especially looking at the figure, it seems to me that the goal might instead be to select the gene regions with the strongest correlation (?). Overall, I think a use case, an example of a data analysis with aPhyloGeo in the article (with the corresponding data and code available and described on GitHub), would highly facilitate understanding the goals.
- [ ] A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
There is the corresponding section, but it reads to me a bit too general. A more concrete example would help here too.
- [ ] State of the field: Do the authors describe how this software compares to other commonly-used packages?
The authors only mention their own previous work in this section. I would expect to see here what can be done with other packages, for instance those used in classical phylogeography: Ancestral Character Reconstruction for geographic and/or climate characters, GLM with climate as a factor, etc.
- [ ] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
Adding more information to the State of the field section would add more references too
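A side note on the LOC assessment discussed in the checklist above: `wc -l` counts every physical line, including blanks and comments, so it will report a larger number than cloc's "code" column. The sketch below illustrates the difference with hypothetical demo files (the file names and contents are invented for illustration, not taken from the aPhyloGeo repository):

```shell
# `wc -l` counts all physical lines (code + blanks + comments),
# while cloc separates those categories. Demo files are hypothetical.
tmpdir=$(mktemp -d)
cd "$tmpdir"
printf 'import os\n\n# a comment\nprint(os.name)\n' > a.py
printf 'x = 1\ny = 2\n' > b.py
# Non-recursive count, as used in the review:
wc -l *.py
# Recursive variant that also covers subpackages:
find . -name '*.py' -print0 | xargs -0 wc -l | tail -n 1
```

Here `wc -l` reports 6 total lines even though only 4 of them are code, which may explain part of the gap between the `wc -l` figure (2152) and cloc's Python code count (1186).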
@TahiriNadia - it looks like @mmore500 and @annazhukova have offered comments on the submission. Do you have any questions on how to proceed?
@mmore500 - when it's possible, please place check marks in your checklist above. It seems to be empty right now. Thanks again!
Apologies for the delay. I thought I had posted some comments on the manuscript much earlier, but the post must not have gone through. Luckily, I was able to find a copy of them and share them in https://github.com/tahiri-lab/aPhyloGeo/issues/49
edit: nevermind, I found those earlier in the thread.
I have updated my checklist, using `x` to mark complete items and `-` to mark items for which I have opened suggestions on the aPhyloGeo issue tracker. I'll keep an eye on these getting resolved, but please don't hesitate to @ me when it's time to take another look
Thank you for your comments. We will respond to each of them individually (with one of the co-authors, @geomarceau).
Thank you @mmore500.
@TahiriNadia - do you have any questions about the needed revisions?
Thank you for the update, @fboehm. We plan to cover all the reviews during the following week.
That sounds good, @TahiriNadia ! Please let me know if questions come up
Some comments on the manuscript. Planning to follow up on the software content shortly.
Introduction
There is a grammar issue in “between a genetic of species and its habitat during the reconstruction”
Statement of Need
Address in more specific terms what scientific question(s) can be addressed through these analyses.
State of the Field
- by topological similarity, do you mean topological agreement?
- Consider phrasing in terms of first person (we) when discussing your own group
figure
- in addition to the legend, it may make sense to also directly label the colored background boxes (beyond the symbol glyph labels; those are nice!!)
- where possible, increase the font size; most of the figure can only be read by zooming way in.
- why are the biopython and python logos in the corner? If it's meant as an acknowledgement, I think that would be better suited to other parts of the paper.
- what is a climate tree? This hasn’t yet been explicitly defined
Pipeline:
- Can you clarify this sentence: "…, forming the basis from which users obtain output data with essential calculations." What data and what calculations?
- In the discussion of the figure, it's unclear what "refer to the YAML file" means, since the YAML file hasn't yet been introduced
- rephrase “optimal” as “optimized”
Multiprocessing:
What windows are you referring to?
Dependencies:
Citations to the software would be appropriate.
Conclusion:
- What specific problems or circumstances will the new methods allow to be tackled?
- "high standards in software development" -> "best practices in software development"
- Could the closing two sentences of the conclusion be made more specific and concrete? As written, they could describe a large number of software projects.
Overall:
a specific application example or case study would greatly benefit the clarity of the manuscript
@mmore500, thank you for your insightful review and constructive feedback. Your comments were instrumental in guiding the revisions we made to the manuscript. Here's a summary of how we've addressed your specific concerns:
In addition, we emphasize the real-world applications of aPhyloGeo in several key areas:
To further showcase the practical utility of aPhyloGeo, we're actively using it in our ongoing project, iphylogeo++, available on GitHub. We sincerely appreciate your time and expertise. Please don't hesitate to share any additional thoughts or suggestions you may have.
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thank you for your thorough review and valuable feedback. We appreciate you taking the time to evaluate aPhyloGeo and providing your insights.

We understand your concerns regarding authorship and contributions, and we want to assure you that all contributors have been duly recognized. The author list and contribution statement have been updated to accurately reflect their involvement.

Good remark regarding software maturity: we acknowledge that aPhyloGeo was registered in 2022. However, it has undergone significant development since then, culminating in the current v1.0 release, which we consider a stable and reliable version. We have also expanded the documentation and included more examples to enhance the user experience.

We acknowledge the limited number of citations for aPhyloGeo, which we attribute to the software's relative novelty. We are actively promoting the software and are confident that its usage and citations will increase as more researchers become aware of its capabilities.

Good observation! Indeed, there are 600 commits listed on the contributors' page and over 2000 lines of code (LOC) in the GitHub repository. Regarding the citations, we have updated the reference list to 19 entries, including two from our lab.

In response to your suggestions, we have updated the wiki to clarify the software's functionality. Additionally, we have added a tutorial to guide users through the analysis process step by step.

We believe that the changes we have made effectively address the concerns raised in your review. We are confident that aPhyloGeo is a valuable tool for researchers investigating the relationship between phylogenetic trees and climatic parameters. Thank you again for your valuable contribution.
We are pleased to confirm that all issues raised during the review process have been fully addressed. The feedback provided by the editor (@fboehm) and reviewers (@mmore500 & @annazhukova) has been invaluable in refining our work and ensuring its adherence to the highest standards of quality and rigor. All identified revisions have been implemented through the corresponding pull requests.
We are deeply appreciative of the time and expertise contributed by the editor and reviewers, and we believe their thoughtful comments have significantly strengthened our project. We consider this issue fully resolved and look forward to the next steps in the publication process.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
I will return shortly to continue working through my checklist. Thanks for the update!
Thank you @mmore500.
Excellent work, everyone! Please continue to mention me (@fboehm) in comments so that I continue to see GitHub notifications when you write. I'll try to monitor closely in the days ahead to ensure that the review proceeds in a timely manner towards publication. Please let me know if any questions come up :)
@editorialbot commands
Hello @mmore500, here are the things you can ask me to do:
# List all available commands
@editorialbot commands
# Get a list of all editors's GitHub handles
@editorialbot list editors
# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist
# Set a value for branch
@editorialbot set joss-paper as branch
# Run checks and provide information on the repository and the paper file
@editorialbot check repository
# Check the references of the paper for missing DOIs
@editorialbot check references
# Generates the pdf paper
@editorialbot generate pdf
# Generates a LaTeX preprint file
@editorialbot generate preprint
# Get a link to the complete list of reviewers
@editorialbot list reviewers
@editorialbot check repository
Software report:
github.com/AlDanial/cloc v 1.90 T=0.09 s (571.0 files/s, 192039.5 lines/s)
--------------------------------------------------------------------------------
Language files blank comment code
--------------------------------------------------------------------------------
Bourne Again Shell 2 308 722 5096
XML 6 0 0 4149
Python 9 491 949 1186
CSV 12 677 0 811
Perl 2 290 104 670
Ruby 1 64 66 375
Markdown 3 111 0 294
TeX 1 0 0 210
YAML 10 15 55 205
TOML 1 8 0 66
DOS Batch 2 7 6 28
PowerShell 1 4 161 11
make 1 3 0 10
--------------------------------------------------------------------------------
SUM: 51 1978 2063 13111
--------------------------------------------------------------------------------
Commit count by author:
209 Nadia Tahiri, PhD
169 my-linh-luu
67 Nadia Tahiri
56 Georges Marceau
37 slepaget
36 Hazem Ben Said
30 cetmus
25 db036
25 geomarceau
18 Elie Maalouf
14 francis.lewis07@gmail.com
11 Simon Lepage-Trudeau
9 simlal
8 Alex
4 Wanlin Li
2 Marc-Antoine Bélisle
2 Mus
2 Nadia Tahiri, Ph. D
2 TahiriNadia
2 jsDesm
1 KarlP910
Paper file info:
📄 Wordcount for paper.md is 2432
✅ The paper includes a Statement of need section
License info:
✅ License found: MIT License (Valid open source OSI approved license)
Thank you to the authors for their revisions to the manuscript. They have been helpful in providing a clearer understanding of the software and underlying methods. I have raised some further comments in https://github.com/tahiri-lab/aPhyloGeo/issues/49#issuecomment-2156182763
One concern is the current word count, which is 2400. As I mentioned in the review, I believe that there is potential for good space savings with some focused editing for brevity in how things are phrased, etc.
I plan to follow up in testing the software on my own machine shortly.
@editorialbot check repository
Software report:
github.com/AlDanial/cloc v 1.90 T=0.09 s (569.6 files/s, 191625.0 lines/s)
--------------------------------------------------------------------------------
Language files blank comment code
--------------------------------------------------------------------------------
Bourne Again Shell 2 308 722 5096
XML 6 0 0 4149
Python 9 491 949 1186
CSV 12 677 0 811
Perl 2 290 104 670
Ruby 1 64 66 375
Markdown 3 115 0 294
TeX 1 0 0 210
YAML 10 15 55 205
TOML 1 8 0 66
DOS Batch 2 7 6 28
PowerShell 1 4 161 11
make 1 3 0 10
--------------------------------------------------------------------------------
SUM: 51 1982 2063 13111
--------------------------------------------------------------------------------
Commit count by author:
209 Nadia Tahiri, PhD
169 my-linh-luu
67 Nadia Tahiri
56 Georges Marceau
37 slepaget
36 Hazem Ben Said
30 cetmus
25 db036
25 geomarceau
18 Elie Maalouf
14 francis.lewis07@gmail.com
11 Simon Lepage-Trudeau
9 simlal
8 Alex
4 Wanlin Li
2 Marc-Antoine Bélisle
2 Mus
2 Nadia Tahiri, Ph. D
2 TahiriNadia
2 jsDesm
1 KarlP910
1 Matthew Andres Moreno
Paper file info:
📄 Wordcount for paper.md is 2379
✅ The paper includes a Statement of need section
License info:
✅ License found: MIT License (Valid open source OSI approved license)
@editorialbot check repository
Software report:
github.com/AlDanial/cloc v 1.90 T=0.09 s (571.5 files/s, 192091.2 lines/s)
--------------------------------------------------------------------------------
Language files blank comment code
--------------------------------------------------------------------------------
Bourne Again Shell 2 308 722 5096
XML 6 0 0 4149
Python 9 491 949 1186
CSV 12 677 0 811
Perl 2 290 104 670
Ruby 1 64 66 375
Markdown 3 113 0 282
TeX 1 0 0 210
YAML 10 15 55 205
TOML 1 8 0 66
DOS Batch 2 7 6 28
PowerShell 1 4 161 11
make 1 3 0 10
--------------------------------------------------------------------------------
SUM: 51 1980 2063 13099
--------------------------------------------------------------------------------
Commit count by author:
209 Nadia Tahiri, PhD
169 my-linh-luu
68 Nadia Tahiri
56 Georges Marceau
37 slepaget
36 Hazem Ben Said
30 cetmus
25 db036
25 geomarceau
18 Elie Maalouf
14 francis.lewis07@gmail.com
11 Simon Lepage-Trudeau
9 simlal
8 Alex
4 Wanlin Li
2 Marc-Antoine Bélisle
2 Mus
2 Nadia Tahiri, Ph. D
2 TahiriNadia
2 jsDesm
1 KarlP910
1 Matthew Andres Moreno
Submitting author: @TahiriNadia (Nadia Tahiri)
Repository: https://github.com/tahiri-lab/aPhyloGeo
Branch with paper.md (empty if default branch): joss-journal
Version: v1.0.0
Editor: @arfon
Reviewers: @annazhukova, @mmore500, @theosanderson
Archive: Pending
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@annazhukova & @mmore500, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @fboehm know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @annazhukova
📝 Checklist for @mmore500
📝 Checklist for @theosanderson