Closed. editorialbot closed this issue 1 year ago.
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.21 s (280.2 files/s, 364097.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          10            476            935           1209
MATLAB                          15            207            137           1030
Jupyter Notebook                10              0          68671            887
Markdown                        13            143              8            733
TeX                              1             42              0            583
YAML                             3              6             14             58
TOML                             1              4              0             55
reStructuredText                 3             33             50             38
DOS Batch                        1              8              1             26
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            58            923          69823           4628
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1207
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/S0021-9290(03)00257-4 is OK
- 10.1016/S8756-3282(02)00736-6 is OK
- 10.1016/j.jbiomech.2020.110205 is OK
- 10.1115/1.2746368 is OK
- 10.1016/j.jbiomech.2008.02.032 is OK
- 10.1115/1.2146001 is OK
- 10.1007/s10237-019-01254-x is OK
- 10.1371/journal.pone.0180151 is OK
- 10.1007/978-3-642-29843-1_56 is OK
- 10.1016/j.parco.2011.08.001 is OK
- 10.1016/j.jmbbm.2020.104190 is OK
- 10.1016/j.jmbbm.2018.06.022 is OK
- 10.1016/j.medntd.2021.100088 is OK
- 10.1115/1.4028968 is OK
- 10.1007/s10237-007-0109-7 is OK
- 10.1016/j.bonr.2022.101179 is OK
- 10.1016/j.jmbbm.2022.105303 is OK
- 10.1101/2021.11.30.470675 is OK
- 10.1016/j.cmpb.2022.106764 is OK
- 10.1016/j.jbiomech.2003.10.002 is OK
- 10.1007/s10237-008-0125-2 is OK
- 10.5281/zenodo.1173115 is OK
- 10.5281/zenodo.5564818 is OK
- 10.1107/S160057751401604X is OK
- 10.1002/adem.201900080 is OK
- 10.1016/j.jmbbm.2016.11.020 is OK
- 10.1080/10255840701848756 is OK
- 10.1080/10255840410001656408 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.7717/peerj.453 is OK
MISSING DOIs
- 10.1109/tsmc.1979.4310076 may be a valid DOI for title: A threshold selection method from gray-level histograms
- 10.1109/tsmc.1978.4310039 may be a valid DOI for title: Picture thresholding using an iterative selection method
INVALID DOIs
- https://doi.org/10.1002/cnm.2941 is INVALID because of 'https://doi.org/' prefix
@gianthk A few issues have come up as I work through the review:
@jacobmerson thank you for the revisions! This is my first JOSS paper. How does it work? Can I already start working on these, committing changes and resolving the issues you opened?
1. Are there tests that can be run either automatically or manually (with documented steps?). I do not see this
@jacobmerson the examples cover all steps of the software's functionality. We will upload the test dataset to reproduce the examples. On top of this, are automated tests mandatory?
> @jacobmerson thank you for the revisions! This is my first JOSS paper. how does it work? can I start working on these already, commit changes and solve the issues you opened?
Yes! Please address any issues that we bring up through the review process as we go. One nice thing about JOSS is that things can be iterative.
> @jacobmerson the examples cover all steps of software functionality. we will upload the test dataset to reproduce the examples. on top of this, are automated tests mandatory?
Please take a look at the review criteria here. You do provide a lot of examples. However, based on my interpretation of the review criteria, these examples alone are not enough. You need to have some mechanism for me to check the correctness of the outputs, either manually or automatically (preferred). You don't necessarily need unit testing, but you do need integration tests.
In other words, given some set of input files and parameters, do I get the correct input files for Abaqus/Calculix out? As much of your public API as possible should have some form of testing.
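One common way to meet this kind of requirement is golden-file (reference-output) testing: run the pipeline on a fixed input and compare the generated FE input deck against a committed reference copy. The sketch below is illustrative only; it does not call ciclope itself, and the sample `.inp` content and file names are hypothetical.

```python
import difflib
import tempfile
from pathlib import Path

def assert_matches_golden(generated: str, golden_path: Path) -> None:
    """Compare generated FE input text against a stored reference ("golden")
    file, raising an AssertionError with a unified diff on mismatch."""
    golden = golden_path.read_text()
    if generated != golden:
        diff = "\n".join(difflib.unified_diff(
            golden.splitlines(), generated.splitlines(),
            fromfile=str(golden_path), tofile="generated", lineterm=""))
        raise AssertionError("Output differs from golden file:\n" + diff)

# Illustrative usage with a throwaway golden file; in a real test suite the
# golden file would be a committed reference .inp produced by a trusted run.
with tempfile.TemporaryDirectory() as d:
    ref = Path(d) / "reference.inp"
    ref.write_text("*NODE\n1, 0.0, 0.0, 0.0\n")
    assert_matches_golden("*NODE\n1, 0.0, 0.0, 0.0\n", ref)  # identical: passes
```

The diff output makes failures actionable: a reviewer can see exactly which lines of the generated deck drifted from the reference.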
@jacobmerson OK, got it. We will need some time to address all the issues.
@gianthk I agree with @jacobmerson. In my opinion, the paper and examples read and execute well. The pip installation is quite nice. However, there is no automatic test validation that the elements are properly implemented. I suggest including a few tests that show that hexa, tetra, quad, and the other element types work well. Along the same lines, it would be nice to compare against some other software to check performance and validate results. Abaqus and Ansys could help if a license is available; otherwise FreeFEM, FEniCS, and deal.II are open-source possibilities.
@jacobmerson @dankusanovic here we go. Thank you very much for reviewing all ciclope functionalities! It took us some time, but it was a chance to find and debug more installation issues.

- [X] `project.toml` now fixes the versions of `pygalmesh` and `meshio`; `pip install ciclope` forces the installation of the correct versions: `pygalmesh==0.10.6` and `meshio==5.0.0`
- [X] we uploaded a small dataset (10x10x10 image) for unit tests
- [X] unit tests implemented for the key methods of `ciclope.core`: the `pymcubes` and `pygalmesh` methods
- [X] how to perform the unit tests is described in the readme as well as on readthedocs

For both `voxelFE` and `tetraFE`, the outputs of mesh generation are asserted against the expected element types and non-empty mesh fields. Similarly, the FE input files generated from the synthetic 10x10x10 dataset are checked for the presence of FE keywords (nodes, elements, analysis definition, ...). In this way we verify that the written FE files are correct.
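The keyword checks described above could be sketched along these lines. The keyword names follow the Abaqus/Calculix input format; the sample deck below is illustrative and is not actual ciclope output.

```python
# Required FE keywords in an Abaqus/Calculix input deck (illustrative subset).
REQUIRED_KEYWORDS = ("*NODE", "*ELEMENT", "*STEP")

def contains_fe_keywords(inp_text: str, keywords=REQUIRED_KEYWORDS) -> bool:
    """Return True if every required FE keyword appears in the input deck."""
    upper = inp_text.upper()
    return all(kw in upper for kw in keywords)

# A minimal, hand-written sample deck with nodes, one hexahedral element,
# and a static analysis step.
sample_inp = """*NODE
1, 0.0, 0.0, 0.0
*ELEMENT, TYPE=C3D8
1, 1, 2, 3, 4, 5, 6, 7, 8
*STEP
*STATIC
*END STEP
"""

assert contains_fe_keywords(sample_inp)            # complete deck passes
assert not contains_fe_keywords("*NODE\n1, 0, 0, 0\n")  # nodes alone fail
```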
@dankusanovic ciclope is not, and does not provide, an FE solver. We think that testing and comparing different FE solvers, or open-source versus proprietary solvers, is out of the scope of our work. We provide unit tests for the functionality of ciclope (the generation of FE input files).
The new release, v1.3.1, should incorporate all the items discussed in the review.
@jacobmerson @dankusanovic , could you please review the changes reported by @gianthk and update your checklists accordingly, or raise new issues if you have any?
@danasolav I'm hoping to get this done within the next week.
@gianthk
* [ ] Update the paper with the appropriate statements for human/animal testing? [statement for animal/human samples for JOSS paper gianthk/ciclope#14](https://github.com/gianthk/ciclope/issues/14)
Yes we did. The latest paper version is in this Action's run.
The microCT bone dataset used in the examples is part of the public collection of the Living Human Digital Library (LHDL) project, a project financed by the European Commission (project number: FP6-IST 026932). Human tissues in the LHDL project were collected according to the body donation program of Université Libre de Bruxelles (ULB), a partner of the project.
@editorialbot generate paper
I'm sorry human, I don't understand that. You can see what commands I support by typing:
@editorialbot commands
@editorialbot commands
Hello @jacobmerson, here are the things you can ask me to do:
# List all available commands
@editorialbot commands
# Get a list of all editors' GitHub handles
@editorialbot list editors
# Check the references of the paper for missing DOIs
@editorialbot check references
# Perform checks on the repository
@editorialbot check repository
# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist
# Set a value for branch
@editorialbot set joss-paper as branch
# Generates the pdf paper
@editorialbot generate pdf
# Generates a LaTeX preprint file
@editorialbot generate preprint
# Get a link to the complete list of reviewers
@editorialbot list reviewers
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@danasolav can you help me understand if the statements made by the authors about human/animal samples are adequate for JOSS?
They provide a reference to the public-access database from which they retrieved the imaging samples (they did not collect the samples themselves). However, the JOSS guidelines require:

> The manuscript must include complete reporting of the data, including the ways that the study was approved and relevant details of the sample. Authors are required to report either the ARRIVE or PREPARE guidelines for animal research in the submission.

However, they do not mention the ARRIVE or PREPARE guidelines. From a pragmatic standpoint, it seems that referencing the public database is adequate, since they did not collect the samples themselves.
Just waiting for @jacobmerson inquiry. On my side, if that is appropriate, then the paper is ready to go. The list has been updated accordingly.
@gianthk @danasolav @dankusanovic I just went through the JOSS requirements again, and it does specify that "original" research needs to include those guidelines; however, with their paper edits it is clear that this is not original animal/human research. They should be all set.
@gianthk A few comments on the paper. I don't consider these to be blockers, but they might improve the quality of your paper:
Line 33-34: you may want to indicate how standard microCT datasets can be converted to numpy arrays. This discussion may be too much for the paper, but you should have it in the documentation, or at least a description of the structure of the expected inputs: a `t x n x n` array, where `t` is the number of frames through the thickness and `n` is the number of pixels across each edge.
Line 45: you should say what a `.inp` file is, i.e., that it is the input file format for Abaqus/Calculix.
Line 50: why is material mapping bolded?
Line 56-65: You mention that there are components you rely on for visualization, but have not included any description of what visualization capabilities exist within Ciclope
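The input structure described in the Line 33-34 comment above could be sketched as follows. The array below is a synthetic stand-in for a microCT dataset; the shape convention (`t` frames through the thickness, `n x n` pixels per slice) is taken from the comment, and the file name in the commented-out snippet is hypothetical.

```python
import numpy as np

# Synthetic stand-in for a microCT dataset: a t x n x n array of 8-bit
# grayscale values, with t frames through the thickness and n x n pixels
# per slice.
t, n = 5, 10
stack = np.random.default_rng(0).integers(0, 255, size=(t, n, n), dtype=np.uint8)
assert stack.shape == (t, n, n)

# With real data, a TIFF stack can be read directly into this layout, e.g.
# with the tifffile package (assumed available):
#   import tifffile
#   stack = tifffile.imread("microct_stack.tif")  # shape (t, n, n)
```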
@gianthk @danasolav @dankusanovic My review is complete. I would like to recommend this work for publication. This paper provides open-source tools to aid in the construction of finite element models from microCT image-stack data, including basic tools for segmentation as well as the construction of input files for Abaqus and Calculix. These types of pipelines are challenging to construct, but lay the foundation for important advancements in the use of patient-specific data in research and clinical settings.
I have suggested some minimal clarification edits to the author; however, I don't consider these to be required.
@jacobmerson thanks for the comments above I will go through the minor edits tomorrow
> Line 33-34: you may want to indicate how standard microCT datasets can be converted to numpy arrays

- [X] github
- [X] readthedocs

> Line 45: you should say what a `.inp` file is. i.e., say that it is the input file format for Abaqus/Calculix

- [X] done

> Line 50: why is material mapping bolded?

- [X] bold removed

> Line 56-65: You mention that there are components you rely on for visualization, but have not included any description of what visualization capabilities exist within Ciclope

- [X] add link to jupyter notebooks in the paper
- [X] ..on readthedocs
- [X] ..on the github readme
@dankusanovic please check your last unchecked box: Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.
@editorialbot generate pdf
Dear @danasolav I am sorry for the delay; I just completely forgot it! It is done now. Thanks for the reminder
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@gianthk, please refer to my comments regarding the paper in this issue.
@danasolav please see all our replies to your open issue and let us know if you have more corrections/suggestions
@danasolav did you have the time to review our replies to your open issue?
@gianthk sorry for the delay, I closed the issue now.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/S0021-9290(03)00257-4 is OK
- 10.1016/S8756-3282(02)00736-6 is OK
- 10.1016/j.jbiomech.2020.110205 is OK
- 10.1115/1.2746368 is OK
- 10.1016/j.jbiomech.2008.02.032 is OK
- 10.1115/1.2146001 is OK
- 10.1007/s10237-019-01254-x is OK
- 10.1371/journal.pone.0180151 is OK
- 10.1002/cnm.2941 is OK
- 10.1007/978-3-642-29843-1_56 is OK
- 10.1016/j.parco.2011.08.001 is OK
- 10.1016/j.jmbbm.2020.104190 is OK
- 10.1016/j.jmbbm.2018.06.022 is OK
- 10.1016/j.medntd.2021.100088 is OK
- 10.1115/1.4028968 is OK
- 10.1007/s10237-007-0109-7 is OK
- 10.1016/j.bonr.2022.101179 is OK
- 10.1016/j.jmbbm.2022.105303 is OK
- 10.1101/2021.11.30.470675 is OK
- 10.1016/j.cmpb.2022.106764 is OK
- 10.1016/j.jbiomech.2003.10.002 is OK
- 10.1007/s10237-008-0125-2 is OK
- 10.5281/zenodo.1173115 is OK
- 10.5281/zenodo.5564818 is OK
- 10.1107/S160057751401604X is OK
- 10.1002/adem.201900080 is OK
- 10.1016/j.jmbbm.2016.11.020 is OK
- 10.1080/10255840701848756 is OK
- 10.1080/10255840410001656408 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.7717/peerj.453 is OK
MISSING DOIs
- 10.1109/tsmc.1979.4310076 may be a valid DOI for title: A threshold selection method from gray-level histograms
- 10.1109/tsmc.1978.4310039 may be a valid DOI for title: Picture thresholding using an iterative selection method
INVALID DOIs
- None
@gianthk please check the missing DOIs and consider adding them to your references.
@gianthk @jacobmerson @dankusanovic Thank you all for your good work and for recommending acceptance of this submission. @gianthk, after correcting the missing DOIs, please issue a new tagged release and archive it (on Zenodo, figshare, or other). Then, please post the version number and archive DOI here.
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/S0021-9290(03)00257-4 is OK
- 10.1016/S8756-3282(02)00736-6 is OK
- 10.1016/j.jbiomech.2020.110205 is OK
- 10.1115/1.2746368 is OK
- 10.1016/j.jbiomech.2008.02.032 is OK
- 10.1115/1.2146001 is OK
- 10.1007/s10237-019-01254-x is OK
- 10.1371/journal.pone.0180151 is OK
- 10.1002/cnm.2941 is OK
- 10.1007/978-3-642-29843-1_56 is OK
- 10.1016/j.parco.2011.08.001 is OK
- 10.1016/j.jmbbm.2020.104190 is OK
- 10.1016/j.jmbbm.2018.06.022 is OK
- 10.1016/j.medntd.2021.100088 is OK
- 10.1115/1.4028968 is OK
- 10.1007/s10237-007-0109-7 is OK
- 10.1016/j.bonr.2022.101179 is OK
- 10.1016/j.jmbbm.2022.105303 is OK
- 10.1101/2021.11.30.470675 is OK
- 10.1016/j.cmpb.2022.106764 is OK
- 10.1016/j.jbiomech.2003.10.002 is OK
- 10.1109/tsmc.1979.4310076 is OK
- 10.1109/tsmc.1978.4310039 is OK
- 10.1007/s10237-008-0125-2 is OK
- 10.5281/zenodo.1173115 is OK
- 10.5281/zenodo.5564818 is OK
- 10.1107/S160057751401604X is OK
- 10.1002/adem.201900080 is OK
- 10.1016/j.jmbbm.2016.11.020 is OK
- 10.1080/10255840701848756 is OK
- 10.1080/10255840410001656408 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.7717/peerj.453 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@danasolav the new release is v1.3.4. Zenodo DOI: https://doi.org/10.5281/zenodo.7716943
@editorialbot set https://doi.org/10.5281/zenodo.7716943 as archive
Submitting author: @gianthk (Gianluca Iori) Repository: https://github.com/gianthk/ciclope Branch with paper.md (empty if default branch): Version: v1.3.4 Editor: @danasolav Reviewers: @dankusanovic, @jacobmerson Archive: 10.5281/zenodo.7716943
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@dankusanovic & @jacobmerson, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @danasolav know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @jacobmerson
📝 Checklist for @dankusanovic