Closed (editorialbot closed this issue 8 months ago)
Hello human, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.01 s (461.8 files/s, 77299.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
TeX                              2             59              0            473
Markdown                         2             77              0            205
YAML                             1              1              4             18
-------------------------------------------------------------------------------
SUM:                             5            137              4            696
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 3752
Failed to discover a valid open source license
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1002/2014GL059576 is OK
- 10.3390/rs12182959 is OK
- 10.1073/pnas.1718850115 is OK
- 10.1071/WF05077 is OK
- 10.6028/NIST.SP.1215 is OK
- 10.1007/978-3-319-51727-8_92-1 is OK
- 10.1038/s41597-022-01343-0 is OK
- 10.52324/001c.8285 is OK
- 10.5281/zenodo.5597138 is OK
- 10.5281/zenodo.10460075 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@editorialbot set as branch
Done! branch is now
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Five most similar historical JOSS papers:
pyCSEP: A Python Toolkit For Earthquake Forecast Developers
Submitting author: @wsavran
Handling editor: @kbarnhart (Retired)
Reviewers: @nvanderelst, @mbarall
Similarity score: 0.8014
flux-data-qaqc: A Python Package for Energy Balance Closure and Post-Processing of Eddy Flux Data
Submitting author: @JohnVolk
Handling editor: @pdebuyl (Active)
Reviewers: @ashwinvis, @dgketchum
Similarity score: 0.7969
gdess: A framework for evaluating simulated atmospheric CO₂ in Earth System Models
Submitting author: @dkauf42
Handling editor: @dhhagan (Active)
Reviewers: @slayoo, @simonom
Similarity score: 0.7914
ComPlot: Comparison Plotter to visually evaluate ocean model simulations
Submitting author: @mvhulten
Handling editor: @lheagy (Retired)
Reviewers: @AnsleyManke
Similarity score: 0.7901
pyveg: A Python package for analysing the time evolution of patterned vegetation using Google Earth Engine
Submitting author: @samvanstroud
Handling editor: @usethedata (Retired)
Reviewers: @arbennett, @usethedata
Similarity score: 0.7889
⚠️ Note to editors: If these papers look like they might be a good match, click through to the review issue for that paper and invite one or more of the authors before considering asking the reviewers of these papers to review again for JOSS.
@editorialbot check repository
@editorialbot check repository
@editorialbot set FEDS-PEC-Protected as branch
Done! branch is now FEDS-PEC-Protected
@editorialbot check repository
Software report:
github.com/AlDanial/cloc v 1.88 T=10.02 s (2.0 files/s, 709.7 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                           4            322            370            930
Jupyter Notebook                 9              0           4123            768
YAML                             2              1              4            287
Markdown                         1             47              0            223
Bourne Shell                     1              7              5             21
JSON                             3              0              0              3
-------------------------------------------------------------------------------
SUM:                            20            377           4502           2232
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Failed to discover a Statement of need section in paper
@editorialbot set paper as branch
Done! branch is now paper
Hi @ksharonin and thanks for your submission. There are a few things to start:
@editorialbot query scope
Submission flagged for editorial review.
Hi @kthyng , thank you for initiating the review process. Regarding your questions:
Our demo directory contains specific notebooks that, when run, should verify the software functionality. Should we make a separate explicit test directory? CC'ing collaborator @mccabete
@ksharonin
designed for maintainable extension
I'll let you know what I hear from the editorial board, thanks.
Thanks for the notes @kthyng.
For the scholarly effort question, I can talk about why I and the fire scientists I work with find Katrina's paper exciting.
We are fire scientists who use satellite imagery to make perimeters of fires. Satellite-based perimeters have better spatial, and sometimes temporal, coverage than state-of-the-art aircraft measurements, but satellite-based perimeters are rarely evaluated for accuracy.
This paper solves a few hard problems associated with comparing perimeters:
Calculates metrics of spatial similarity (a rough sketch of one such metric follows this list).
Identifies “matching” events in different databases, even when the events might have different locations, shapes, and timestamps.
Can automatically compare perimeters for two important fire perimeter APIs.
Offers options for individuals to compare local datasets.
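To give non-fire-scientists a feel for what a spatial-similarity metric looks like here, a minimal sketch using shapely directly; the function name and the toy polygons are illustrative only, not the paper's actual API:

```python
# Illustrative sketch only, not FEDS-PEC's actual API.
# Intersection-over-union (IoU) is one common spatial-similarity metric
# between two fire perimeters represented as polygons.
from shapely.geometry import Polygon

def iou(perimeter_a: Polygon, perimeter_b: Polygon) -> float:
    """Jaccard index of two perimeters: shared area over combined area."""
    union_area = perimeter_a.union(perimeter_b).area
    if union_area == 0:
        return 0.0
    return perimeter_a.intersection(perimeter_b).area / union_area

# Two toy "perimeters": 2x2 squares offset by half their width.
a = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
b = Polygon([(1, 0), (3, 0), (3, 2), (1, 2)])
print(f"IoU: {iou(a, b):.2f}")  # 0.33: overlap area 2, union area 6
```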
We anticipate using this work for:
In particular, having a demo notebook for this code is well-matched to how this software would be used. The scientists that this was built for are familiar with notebooks. Demonstrating how this code is used in an example notebook is key for adoption.
Thanks for considering it.
[I'm writing as another JOSS editor trying to review the scope of this submission]
JOSS is happy with notebooks as demonstrations of how to use code, but the main JOSS review is of the code that the notebook demonstrates. This is the part that needs to be designed for maintainable extension.
Where is this code? I was looking for a packaged library, but can't find one. Is the code the Python files in the main directory of the FEDS-PEC-Protected branch?
Also, given that this is described as a "module" in the About, does that imply that there is a larger software package that this code is used in? If so, how is such a module loaded into that larger package and then used?
Hi @danielskatz
Thanks for the update, @ksharonin - this is helpful. I'm not sure about the right language, but if this passes the scope review, I'm sure the reviewers can give feedback.
@ksharonin @mccabete Thanks for the information. I am a big fan of notebooks and think they are a great way to demonstrate usage of software. But, I don't think they are a good place to keep any of the actual code of a package that is going to be used, reused, worked on by others, and changed over time. Notebooks are notoriously difficult to track changes in over time with git. I am open to more arguments and info from you, but here are my thoughts:
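One pattern that addresses this, sketched minimally below, is to keep the actual logic in a plain importable module (which diffs cleanly in git) and let each notebook shrink to a thin demonstration that imports and calls it. The module name feds_pec_core is a hypothetical placeholder, not your repository's layout:

```python
# feds_pec_core.py (hypothetical name; your package layout may differ)
# Plain .py files diff cleanly in git, unlike notebook JSON.
import geopandas as gpd

def load_perimeters(path: str) -> gpd.GeoDataFrame:
    """Read fire perimeters from any vector format geopandas understands."""
    return gpd.read_file(path)

# A demo notebook cell then reduces to:
#   from feds_pec_core import load_perimeters
#   perimeters = load_perimeters("demo/perimeters.geojson")
```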
I have these detailed questions about your submission given your emphasis on notebooks in this case.
@kthyng @mccabete Thank you for your points; I was unaware that .ipynb files were a poor choice for git, and I am absolutely open to reformatting our work to improve the current state.
Should I await the scope review's decision before formally pursuing these changes?
@ksharonin Give it a little more time before getting started. I'll try to get back to you soon.
@ksharonin Ok so this submission is a bit tricky. It looks like there is good research software at the core, but it also needs some work to make it through the JOSS review process because the packaging and some other details are not up to par.
The main issues are packaging (ideally users should be able to pip install the package through PyPI), unit tests that exercise the individual bits of functionality in the code (not just the full examples), and a set of docs, hosted on readthedocs or otherwise easily accessible, that can consist of those example notebooks and much of the info from your readme. A great resource for this is https://learn.scientific-python.org/development/; tons of up-to-date info on these pages.
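To be concrete about the unit-test granularity I mean, here is a minimal pytest-style sketch; the module and function names (feds_pec_core, iou) are placeholders for whatever small pieces of functionality your code exposes, not your actual API:

```python
# test_metrics.py (placeholder names; run with `pytest`)
# Each test exercises one small piece of functionality in isolation,
# rather than executing an entire example notebook end to end.
from shapely.geometry import Polygon
from feds_pec_core import iou  # hypothetical module and function

def test_identical_perimeters_have_iou_one():
    square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
    assert iou(square, square) == 1.0

def test_disjoint_perimeters_have_iou_zero():
    a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
    b = Polygon([(5, 5), (6, 5), (6, 6), (5, 6)])
    assert iou(a, b) == 0.0
```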
Other issues:
Would you like to try to address these issues and pursue publishing in JOSS? If so, we can pause this pre-review issue while you address the comments over the next few weeks. Or, probably better, you could withdraw the submission and resubmit later after fixing things up. Alternatively, you may be able to publish this in a different venue without needing to put in additional work to meet JOSS standards.
Hi @kthyng , apologies for the delayed reply!
We thank you for laying out the options for our submission. Based on your feedback, we will withdraw our submission and resubmit with the corrections you suggest. These include the Python packaging, unit tests, documentation, and updating the JOSS paper to meet the length standards. I aim to complete these changes by the end of my academic semester.
Thank you for helping us navigate the JOSS publication process; we sincerely appreciate it!
CC'ing @mccabete
@ksharonin Ok sounds great! When you resubmit, please link to this issue so we have all the context.
@editorialbot withdraw
Paper withdrawn.
Submitting author: @ksharonin (Katrina Sharonin)
Repository: https://github.com/ksharonin/feds-benchmarking
Branch with paper.md (empty if default branch): paper
Version: v.1.0.0
Editor: Pending
Reviewers: Pending
Managing EiC: Kristen Thyng
Status
Status badge code:
Author instructions
Thanks for submitting your paper to JOSS @ksharonin. Currently, there isn't a JOSS editor assigned to your paper.
@ksharonin if you have any suggestions for potential reviewers then please mention them here in this thread (without tagging them with an @). You can search the list of people that have already agreed to review and may be suitable for this submission.
Editor instructions
The JOSS submission bot @editorialbot is here to help you find and assign reviewers and start the main review. To find out what @editorialbot can do for you, type:
@editorialbot commands