Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @trallard, it looks like you're currently assigned as the reviewer for this paper :tada:.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿
To fix this, do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
Attempting PDF compilation. Reticulating splines etc...
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
@trallard, I'm currently using the Linux build instructions to verify manual installation. It mostly looks okay.
However, I can't verify the Windows build, as I don't have access to a Windows system.
Would it be possible for you or @katyhuff to verify building/installation on Windows?
I don't have a Windows machine either. I can see about setting up a VM, but generally, confirming installation on the reviewer's platform is sufficient for the purposes of a JOSS review.
I can conduct my review on a Windows machine :grin: if needed, but if this works on Linux, I would say it satisfies the installation requirements.
Currently having issues with various itk header files during compilation. One example:
```
[ 40%] Building CXX object modules/CMakeFiles/Segmentation.dir/Segmentation/iAWatershedSegmentation.cpp.o
In file included from /home/brad/JOSS/workspace/src/Toolkit/FuzzyCMeans/itkFCMClassifierInitializationImageFilter.h:21:0,
                 from /home/brad/JOSS/workspace/src/modules/Segmentation/iAFuzzyCMeans.cpp:23:
/home/brad/JOSS/workspace/src/Toolkit/FuzzyCMeans/itkFuzzyClassifierInitializationImageFilter.h:31:10: fatal error: itkMultiThreader.h: No such file or directory
```
ITK_DIR points to /bin-4.13-itk, as directed by the build instructions:
"Set ITK_DIR: /workspace/ITK/bin-4.13.0"
I'll see about setting the path to the header files as needed for the Linux build...
I don't have an immediate idea where this error could come from; the include paths should be set up properly by CMake, which doesn't seem to be the case here. As we have tested mostly on Debian/Ubuntu-based distributions so far, one possible source of problems I could imagine is the Linux distribution - which one are you on? Did the CMake configuration, generation and build of ITK/VTK succeed? And the CMake configuration and generation of open_iA - no errors or warnings there?

Edit: One more thing - it's not explicitly noted in the build instructions, but you are required to keep the ITK/VTK source directories. Alternatively, you could probably install the ITK/VTK libraries to some third location and then use those install locations as ITK_DIR/VTK_DIR, though we haven't really experimented much with this yet.
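For reference, consumer-side ITK usage in CMake is supposed to look roughly like this (a minimal sketch, not our actual CMakeLists.txt; the project and file names are placeholders):

```cmake
cmake_minimum_required(VERSION 3.10)
project(itk_consumer CXX)

# ITK_DIR must point to a directory containing ITKConfig.cmake,
# e.g. the ITK build tree (/workspace/ITK/bin-4.13.0 above).
find_package(ITK 4.13 REQUIRED)
include(${ITK_USE_FILE})  # populates include paths (itkMultiThreader.h etc.) and definitions

add_executable(example main.cpp)
target_link_libraries(example ${ITK_LIBRARIES})
```

If `find_package` finds the right ITK build, the fatal error above should not occur; a stale CMake cache is one way this chain can break.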
I'm using Ubuntu 18.04.1 LTS. ITK/VTK compiled fine per the instructions.
After I installed the ITK/VTK builds, CMake picked up those locations. When the open_iA build failed, I switched ITK_DIR to the ITK build directory (then re-ran CMake after a `make clean`).
It's best that I start anew, following the instructions exactly, as sketched below. Also, I've found that reconfiguring sometimes fails; cleaning out the previous build often fixes issues.
I'll work through this again on Friday. Thank you for the suggestions, and sorry for any unnecessary delays.
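Concretely, the clean restart I have in mind is roughly the following (a sketch; the directory layout and the VTK path are placeholders for my local setup):

```sh
# Wipe the previous build to avoid stale CMake cache entries.
rm -rf build && mkdir build && cd build
cmake -DITK_DIR=/workspace/ITK/bin-4.13.0 \
      -DVTK_DIR=/workspace/VTK/build \
      ../open_iA
make -j"$(nproc)"
```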
What we've been thinking about for some time is providing a "superbuild", i.e. a CMake project that gathers, configures and builds all the required libraries for open_iA. I'm currently trying to put something together in that direction; this should make it easier to set up a working build environment. Maybe you want to wait for that, though I'm not sure when exactly I will be able to provide it.
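The basic idea is to use CMake's ExternalProject module; in rough outline (a sketch only - the repository tags, options and layout here are illustrative, not necessarily what the final superbuild will use):

```cmake
cmake_minimum_required(VERSION 3.10)
project(open_iA_superbuild NONE)
include(ExternalProject)

# Fetch and build ITK first; its build tree is used directly as ITK_DIR below.
ExternalProject_Add(ITK
  GIT_REPOSITORY  https://github.com/InsightSoftwareConsortium/ITK.git
  GIT_TAG         v4.13.0
  CMAKE_ARGS      -DBUILD_TESTING=OFF -DBUILD_EXAMPLES=OFF
  INSTALL_COMMAND ""
)

# Then configure open_iA against the freshly built ITK
# (VTK and other dependencies would be added the same way).
ExternalProject_Add(open_iA
  DEPENDS         ITK
  GIT_REPOSITORY  https://github.com/3dct/open_iA.git
  CMAKE_ARGS      -DITK_DIR=${CMAKE_BINARY_DIR}/ITK-prefix/src/ITK-build
  INSTALL_COMMAND ""
)
```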
There is a somewhat working version of a superbuild available now, see https://github.com/3dct/open_iA-superbuild ; so far we have tested it only on Ubuntu 18.04.1 and 18.10, it should simplify the build procedure significantly! Hope this helps.
I'll give it another try before Monday as well. Btw, the Blender project provides a "superbuild" script. You might use that as a starting point.
Yes, this should be helpful. Will try it soon!
Hi, just dropping by: I will pause this review on my side and resume on the 19th of February. I will aim to get it all completed within that week.
I've been somewhat delayed recently due to circumstances. I have an opportunity this Thursday/Saturday to finish the review on my end in its entirety.
The superbuild works fine on my system. Trying to verify functionality now...
@codeling, is there an automated test suite & data for each area of functionality claimed in https://github.com/3dct/open_iA/wiki/User-Guide? I don't see such a suite provided as a script in the source tree, nor any installation of sample data.
@katyhuff, what degree of coverage is needed for testing functionality? For example, each use case for a given data set could itself be tested with each file type supported: https://github.com/3dct/open_iA/wiki/File-Formats
At this point, we only have limited automated tests available for some library functions. See the iASimpleTester, used in tests of the iAStringHelper and in two tests in the RandomWalker module.
We were planning to include more tests but haven't found the time to do so yet.
The existing tests are run as part of our daily (Mon-Fri) CDash build. Its latest results are available here.
Since we recently added the possibility to run filters in the modules via the command line, we are planning to add tests for all filters and file formats there; a sketch of how that could look follows below.
Most of our visual analytics tools are heavily GUI- and interaction-based. We do provide test datasets for most of them on our Releases page (the TestDatasets-2018.12.zip attached to our latest release, 2018.12). We currently do not, however, test these automatically, but would be very glad about pointers on how we could easily achieve that!
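For the command-line filter tests, hooking the runs into CTest might look roughly like this (a sketch; the executable name, flags and data paths are assumptions, not our actual command-line interface):

```cmake
enable_testing()

# Smoke test: run one filter on a sample dataset and fail on error output.
add_test(NAME median_filter_smoke
  COMMAND open_iA_cmd --run "Median Filter"
          --input  ${TEST_DATA_DIR}/sample.mhd
          --output ${CMAKE_BINARY_DIR}/sample_median.mhd)
set_tests_properties(median_filter_smoke PROPERTIES
  FAIL_REGULAR_EXPRESSION "ERROR;FATAL")
```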
I have completed my review; see the checklist. @katyhuff, @trallard, and @codeling, please let me know if you have any questions or concerns.
Sorry for the delay in responding to your question, @behollister .
> what degree of coverage is needed for testing functionality? For example, each use case for a given data set could itself be tested with each file type supported: https://github.com/3dct/open_iA/wiki/File-Formats
Generally speaking, my approach to this is to assess whether the scientific functionality of the work is demonstrated and confirmed by the tests, so full coverage is certainly not required, but test coverage should confirm the capability claims of the work.
Anyway - thanks for conducting your review, @behollister!
@trallard: I know you estimated review completion around this time. I hope that's still the target timeline! Thank you!
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
PDF failed to compile for issue #1185 with the following error:
```
Error producing PDF.
! Missing $ inserted.
```
@codeling I'm going through this a bit to bring this submission to completion. Please see: https://github.com/3dct/open_iA/issues/36
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
PDF failed to compile for issue #1185 with the following error:
```
Error producing PDF.
! Missing $ inserted.
```
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
PDF failed to compile for issue #1185 with the following error:
```
Error producing PDF.
! Missing $ inserted.
```
Hi, sorry for the massive delay on this. I tested on a Windows machine and all seems to be working OK. Due to the comments above regarding tests and the functionality claims of the package, I ran my own review and local test based on the user guide.
So far I could not spot anything that was not working as expected or that limited the use of the software for research purposes.
As this develops, and to address the concern raised by @behollister:
> @katyhuff, what degree of coverage is needed for testing functionality? For example, each use case for a given data set could itself be tested with each file type supported: https://github.com/3dct/open_iA/wiki/File-Formats
More examples covering more file formats could be added, though they are not required for acceptance of this piece of software in JOSS.
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
PDF failed to compile for issue #1185 with the following error:
```
Error producing PDF.
! Missing $ inserted.
```
It seems the compilation error persists, but thus far I am happy to recommend acceptance in my reviewer role. Apologies for the long time taken to complete this.
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
The problem was that I had escaped the underscore in the title (metadata) in paper.md as well. This seems to have been accepted in the previous build from Jan 28, but not anymore...
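In other words, the metadata now contains the plain name (illustrative snippet; the full title is abbreviated here):

```yaml
---
# Correct: a plain underscore in the YAML metadata...
title: "open_iA: A tool for ..."
# ...whereas the escaped form ("open\_iA") made LaTeX fail with "Missing $ inserted".
---
```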
Thanks @behollister and @trallard for your reviews, and thank you @codeling for your submission!
Thank you @codeling for a strong submission and for engaging actively in the review process! I have looked over the paper, double checked all the DOI links, and have conducted a high level review of the code itself. Everything looks ship-shape to me.
At this point, it'll be good to double check the paper, review any lingering details in your code/readme/etc., and then make an archive of the reviewed software in Zenodo/figshare/other service. Please be sure that the DOI metadata (title, authors, etc.) matches this JOSS submission. Once that's complete, please update this thread with the DOI of the archive, and I'll move forward with accepting the submission! Until then, now is your moment for final touchups!
Great! I will prepare a release tomorrow! I guess it is okay if the version number differs from what was specified when we submitted the first draft? (A lot has changed since that version already, so changing that version feels wrong.)
Took me longer than planned to prepare the release, but here it is:
Note: I have already updated the title, author and description in the Zenodo DOI metadata to match the ones in this submission. Edit: It took me a while to realize that editing an entry on Zenodo is a two-pass process: you first have to save the data, then open the record again and "publish" it... now the data should be correct!
Edit: Updated DOI - now using the DOI that references all (future) versions of open_iA.
So, I'm sorry I didn't get back to you earlier on your previous question.
It needs to be the version that was reviewed (so, your master branch...), particularly if the changes have been significant. However, a new submission can be made to review the changes since then; that review should go faster and will result in another version of the JOSS DOI.
The release is in our master branch, but at a newer tag (2019.03) than the version that was current when submitting this review (2018.12). Since we use such "dated" version tags, modifying the old 2018.12 version to point to the current release is more or less out of the question.
Changes between 2018.12 and 2019.03 include those done in response to the review comments, but also a few other adaptations (mostly bugfixes).
So I guess we should resubmit the 2019.03 version for a new review in JOSS?
I'm curious then about the proposed workflow and tagging scheme in a JOSS review: if the version specified at submission and the one at the end of the review (in between which changes can occur, especially those triggered by the review) should be the same, this implies that the tag needs to be modified, right? While technically this is no problem, for me it goes a bit against the original intention of tags (permanently naming a specific state). Or am I misunderstanding something here? Or do I need to create a Zenodo DOI at the time of submission and use that archive at the end?
@katyhuff Should I submit the new version as new submission with a reference to here? Or how should we proceed?
Hi @codeling sorry for the confusion. I think we can move forward with just a little clarification, if you have a moment. This will help me identify a path forward quickly, as I am on vacation and won't be able to look closely at the full diff for a few days. I'll try to answer some questions, and will ask a couple as well.
> I'm curious then about the proposed workflow and tagging scheme in a JOSS review: If the version specified at submission and the one at the end of the review (in between which potential changes can occur, especially those triggered by the review) should be the same, this implies that the tag needs to be modified, right?
The DOI created for the JOSS submission should include all commits in the version you originally submitted plus the commits comprising changes triggered by the review, which is why we ask you to create that release at the end of the review, on the branch that we reviewed. Ideally, there shouldn't be lots of other commits folded in that we never saw during review. Some minor changes are generally fine, but I'm confused by your comments about the DOI you made, so I'm not sure what the case is here.
It seems that the intermediate step, required by your dated release scheme, included folding in other edits that were never involved in the review. The two comments on this seem slightly conflicting about the magnitude of those changes:
> a lot has changed since that version already, so changing that version feels wrong

> Changes between 2018.12 and 2019.03 include those done in response to the review comments, but also a few other adaptations (mostly bugfixes).
But, in looking quickly at the release notes, those changes do indeed seem to be mostly bugfixes and targeted improvements:
Regarding your comment:
> I'm curious then about the proposed workflow and tagging scheme in a JOSS review... While technically this is no problem, this for me goes a bit against the original intention of tags (permanently naming a specific state).
Usually, submitting authors request a review on master and make all review-triggered changes in that branch, while keeping other major development in a separate branch during the review. But there are a handful of ways to keep major edits out of the DOI that is archived alongside the JOSS review (a joss-review branch that is rebased at some point, etc.; a sketch follows below). The release tag associated with the JOSS review should permanently point to the release that JOSS reviewers reviewed (including changes triggered by the review, because those were part of the review).
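For example (the branch and tag names here are purely illustrative, not a JOSS requirement):

```sh
# Branch off the submitted state and collect only review-triggered fixes there.
git checkout -b joss-review 2018.12
# ... commit the changes requested during review ...
# Tag the reviewed state under a new name; the original 2018.12 tag stays untouched.
git tag 2018.12-joss
git push origin joss-review 2018.12-joss
```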
> So I guess we should resubmit the 2019.03 version for a new review in JOSS?

If I and the reviewers can quickly agree that the minor edits and bugfixes do not need their own independent review, then no. But if those changes are major, then yes.
Final comment: I'm also somewhat confused about this statement - can you clarify what you mean here:
> Edit: Updated DOI - using DOI referencing all (future) versions of open_iA
Submitting author: @codeling (Bernhard Fröhler)
Repository: https://github.com/3dct/open_iA
Version: 2019.03
Editor: @katyhuff
Reviewers: @trallard, @behollister
Archive: 10.5281/zenodo.2591999
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@trallard & @behollister, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @katyhuff know.
✨ Please try and complete your review in the next two weeks ✨
Review checklist for @trallard
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Authors: Does the paper.md file include a list of authors with their affiliations?

Review checklist for @behollister
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Authors: Does the paper.md file include a list of authors with their affiliations?