editorialbot opened 1 month ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
✅ OK DOIs
- 10.1002/anie.200603675 is OK
- 10.1039/D3MA00136A is OK
- 10.6028/jres.117.010 is OK
- 10.1063/5.0032116 is OK
- 10.21105/joss.06371 is OK
- 10.1080/08940886.2019.1608121 is OK
🟡 SKIP DOIs
- No DOI given, and none found for title: Materials Acceleration Platform: Accelerating Adva...
- No DOI given, and none found for title: Bayesian Optimization of Spray Parameters for the ...
❌ MISSING DOIs
- None
❌ INVALID DOIs
- None
Software report:
github.com/AlDanial/cloc v 1.90 T=0.10 s (1005.5 files/s, 287728.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
XML                              3             14              0          11151
Scheme                           2             26              0           9545
Python                          40           1022           1170           3917
Markdown                        20            217              0            742
reStructuredText                26            313            480            372
Jupyter Notebook                 4              0            326            113
TeX                              1              8              0             78
TOML                             1              4              2             51
Arduino Sketch                   1              5              0             50
YAML                             3              9             11             49
CSS                              1              7              0             38
INI                              2             13              0             27
-------------------------------------------------------------------------------
SUM:                           104           1638           1989          26133
-------------------------------------------------------------------------------
Commit count by author:
29 mxwalbert
3 selinawillswissen
Paper file info:
📄 Wordcount for paper.md is 407
✅ The paper includes a Statement of need section
License info:
✅ License found: MIT License (Valid open source OSI approved license)
👉 📄 Download article proof · 📄 View article proof on GitHub 👈
@mxwalbert Thank you for your submission to JOSS. I went through the paper and repository and I have concerns about the scholarly effort and the necessity of the library. I list them one by one:

1) Need and Novelty: the manuscript fails to show the problems in the field that the library attempts to solve. At the same time, it is not clear at all what the library brings to the table. The manuscript should clearly answer these questions: What gaps exist in the field that this library fills? What software currently exists, if any, and why can't it tackle those issues? How does this library compare to, and improve upon, those tools?

2) Functionality not mentioned: you make a good attempt at briefly mentioning the importance of accelerating materials research and combinatorial analysis. However, there is no mention of exactly how or with what the library helps you. The manuscript includes vague and generic phrases such as “simplify the process of…” and “…provides researchers with a flexible and modular framework…”. The authors should aim to explain more specifically what these advantages are so the reader can more effectively judge whether it is useful to them.

3) Insufficient scholarly effort: it is not clear to me that the library gathers enough work to be published. Even though the repository has existed for over a year, it only shows a couple of periods with activity and just a handful of commits. The fact that it has been used in previous work does not mean that there was enough effort behind the library. In this same regard, that work is not published or available as a preprint, so there is no way to even verify this.

4) List of authors: the main author has been the main contributor without a doubt, with just a couple of lines contributed by who I believe to be the second author. The other three authors have not committed any code, so it is unclear to me why they are listed as authors. I recommend using CRediT to assign the corresponding roles to each author. You can do this in the manuscript as a separate section at the end.
Because of these concerns I have not checked the library or tried the code at all. I'll proceed to do so after the author successfully addresses these points. Please note that these points are meant to be constructive; you may have a good piece of software here, but the manuscript is just not reflecting that. I'm happy to clarify and discuss all of the above and help address some of the points mentioned.
@enricgrau Thank you for pointing out your concerns about our submission. I want to address them one by one:
> Need and Novelty: the manuscript fails to show the problems in the field that the library attempts to solve. At the same time, it is not clear at all what the library brings to the table. The manuscript should clearly answer these questions: What gaps exist in the field that this library fills? What software currently exists, if any, and why can't it tackle those issues? How does this library compare to, and improve upon, those tools?
In the current manuscript, we attempt to answer these questions in the Statement of need section. We state the purpose, "(…) simplify the process of setting up and executing combinatorial voltaic measurements.", and briefly compare the software against other existing tools in the second paragraph. Could you please specify what exactly raised your concern?
> Functionality not mentioned: you make a good attempt at briefly mentioning the importance of accelerating materials research and combinatorial analysis. However, there is no mention of exactly how or with what the library helps you. The manuscript includes vague and generic phrases such as “simplify the process of…” and “…provides researchers with a flexible and modular framework…”. The authors should aim to explain more specifically what these advantages are so the reader can more effectively judge whether it is useful to them.
We try to summarize the high-level functionality and advantages of the software in the first paragraph of the Statement of need section but apparently fail to do so. The "vague and generic phrases" you mention are intentional because they point out exactly these advantages: we designed the software to be as abstract/generic as possible in order to make it usable by a large audience. Researchers should be able to use their own measurement equipment with our software to run combinatorial measurements. Certainly, they still have to implement an API for their specific hardware to interface it with our software, but we provide an extensive guide in the documentation for doing so. The advantages then come into play because it is easy to set up and run experiments, store the data, collect relevant metadata, and analyse the results. Furthermore, the implemented components can be reused, which enables you to run a `Measurement` using different `Device`s. We intentionally left this level of detail out of the manuscript because it is written in the repository's readme. Do you think describing this briefly would resolve your concern?
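To illustrate the kind of reuse described here, a minimal hypothetical sketch of such a modular design (the class and method names below are illustrative assumptions for this discussion, not the actual cohesivm API):

```python
# Hypothetical sketch of a modular Measurement/Device design.
# Class and method names are assumptions, not the real cohesivm API.
from abc import ABC, abstractmethod


class Device(ABC):
    """Abstract hardware interface; users implement this for their own equipment."""

    @abstractmethod
    def measure_voltage(self) -> float:
        ...


class SimulatedDevice(Device):
    """Stand-in for real hardware, e.g. for testing a measurement routine."""

    def __init__(self, reading: float):
        self.reading = reading

    def measure_voltage(self) -> float:
        return self.reading


class Measurement:
    """Reusable measurement routine that works with any Device implementation."""

    def __init__(self, device: Device):
        self.device = device

    def run(self, n_points: int) -> list[float]:
        # The routine only talks to the abstract interface, so swapping the
        # underlying hardware does not require changing the measurement code.
        return [self.device.measure_voltage() for _ in range(n_points)]


# The same Measurement logic can be reused with a different Device:
m = Measurement(SimulatedDevice(reading=0.7))
print(m.run(3))  # [0.7, 0.7, 0.7]
```

The point of the sketch is only the dependency-inversion pattern: the measurement depends on an abstract interface, so new hardware is supported by implementing that interface rather than modifying the measurement.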
> Insufficient scholarly effort: it is not clear to me that the library gathers enough work to be published. Even though the repository has existed for over a year, it only shows a couple of periods with activity and just a handful of commits. The fact that it has been used in previous work does not mean that there was enough effort behind the library. In this same regard, that work is not published or available as a preprint, so there is no way to even verify this.
For this submission, the scholarly effort should not be measured based on the number of commits. We do not have a background in software development, and most of the work was done locally before we even considered a publication. I think the effort behind this project becomes evident by looking at the source code and documentation. As for the cited previous work, we are currently drafting the paper, but some of the effort can be verified by checking the hardware documentation in the repository: MA8X8.
> List of authors: the main author has been the main contributor without a doubt, with just a couple of lines contributed by who I believe to be the second author. The other three authors have not committed any code, so it is unclear to me why they are listed as authors. I recommend using CRediT to assign the corresponding roles to each author. You can do this in the manuscript as a separate section at the end.
The same reason from above applies here. But certainly, we will include a CRediT section in the revision to clarify the contributions.
Please let me know, if possible, how we should address concern 1 and whether concerns 2–4 can be resolved as suggested in my comments.
@mxwalbert For 1) and 2): I went over the manuscript again and I'm still not sure what the library does and why it exists. For instance, the first sentence of the Statement of need declares that it "... aims to simplify the process of setting up and executing combinatorial voltaic measurements." Why does this process need to be simplified? How does the library make it simpler? Try to follow that sentence with "... by doing XYZ, which makes it simpler." or something along those lines. Another example is when you claim that it is simpler to use. How? Why? What makes other options complicated to implement? I'd recommend focusing on its specific functionality and being more specific overall. By doing that, the manuscript will reflect its simplicity and support your other claims without emphasizing buzzwords so much. Maybe including something like a "Key features" section in the manuscript could help. Is COHESIVM for data collection? Data processing? Visualization? System control? All of the above? I'm genuinely not sure after reading the manuscript. In a few words, my issue is that I don't think the manuscript projects to be sufficiently useful or likely to be cited or used.
For 3): As per the JOSS guidelines, the number of commits is a factor we can use to judge scholarly effort. Any chance you can upload a draft to arXiv? That would be enough to support the claim that the library has been used before. Using the docs to support that claim is too self-referential an argument in my opinion (i.e., citing a paper within the same paper).
@enricgrau Thank you for the clarification. I will follow your suggestions and phrase a more specific description.
I agree that the number of commits can be used, but in my understanding it is not a requirement. This is why I referred to the repository and documentation as the basis for judgement. Unfortunately, the draft is not ready yet, as we plan to finish it by the end of November. However, that paper will be a completely separate contribution and will only briefly mention the use of this software. So everything within the repository is open for judging the effort of this submission, since it is not and will not be published elsewhere.
Submitting author: @mxwalbert (Maximilian Wolf)
Repository: https://github.com/mxwalbert/cohesivm
Branch with paper.md (empty if default branch):
Version: v1.0.0
Editor: @RMeli
Reviewers: @ericfell, @enricgrau
Archive: Pending
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@ericfell & @enricgrau, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @RMeli know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @enricgrau
📝 Checklist for @ericfell