whedon closed this issue 7 years ago
Hello human, I'm @whedon. I'm here to help you with some common editorial tasks for JOSS.
For a list of things I can do to help you, just type:
@whedon commands
@whedon commands
Here are some things you can ask me to do:
# List all of Whedon's capabilities
@whedon commands
# Assign a GitHub user as the reviewer of this submission
@whedon assign @username as reviewer
# List the GitHub usernames of the JOSS editors
@whedon list editors
# List of JOSS reviewers together with programming language preferences and domain expertise
@whedon list reviewers
# Change editorial assignment
@whedon assign @username as editor
# Set the software archive DOI at the top of the issue e.g.
@whedon set 10.0000/zenodo.00000 as archive
# Open the review issue
@whedon start review
:construction: Important :construction:
This is all quite new. Please make sure you check the top of the issue after running a @whedon command (you might also need to refresh the page to see the issue update).
@whedon list reviewers
Here's the current list of JOSS reviewers: https://github.com/openjournals/joss/blob/master/docs/reviewers.csv
Hello! @arokem @rougier @sealhuang @oesteban @effigies @nirum @ahwillia - you are all in neuroimage/neuroscience + Python - is anyone interested in peer-reviewing bruker2nifti as submitted to JOSS? Prior experience with Bruker ParaVision software/formats is not essential, but it would help.
@neurolabusc @matthew-brett do you know someone potentially interested in peer-reviewing the code?
Thanks!
I don't have any experience with Bruker, so if any other potential reviewers do, you should probably prioritize them over me. That said, as long as there are some good and bad Bruker files (for bad, ideally including an explanation of what's wrong, such as malformed header, truncated data segment, etc.) I think I could review this.
Other than Matthew, I would think of @andrewjanke
@mfroeling @jhuguetn (Jordi, can you also ask Paul Groot?) would you like to review this submission for JOSS? Or do you know anybody who could help? If you accept, we'll open up a review issue. The review process follows our review guidelines. Let me know if you have any questions.
@whedon assign @Kevin-Mattheus-Moerman as editor
OK, the editor is @Kevin-Mattheus-Moerman
@SebastianoF can you provide info in relation to @effigies's question, i.e. do you have some test/example Bruker files a reviewer could play with?
@Kevin-Mattheus-Moerman @effigies there are Bruker files a reviewer can play with, in the test_data folder. They are a good starting point: lightweight, and the only ones I could provide without copyright restrictions at the moment. This may make the tests somewhat self-referential, and obtaining a non-copyrighted external (and therefore unbiased) dataset is not straightforward. I heard offline that @neurolabusc had in mind to create a benchmarking platform with a wider dataset available, where the tests would be unbiased. He has already pointed out a subtle frame-inversion issue that occurs in ParaVision version 5.1, raised in issue #11.
A nice repository of images is here: https://gitlab.com/naveau/bruker2nifti_qa/tree/master. I think a challenge is that while the Bruker file format is thoroughly documented, the files created by Bruker software often omit required fields from some of the files. You are free to compare the results of this code against my own: https://github.com/neurolabusc/Bru2Nii. Since my code pre-dates this project (its origins are in the 1990s), it tends to rely more on older Bruker files and uses different heuristics to deal with missing fields. Of all my projects, this is the one I am least proud of. Poorly conceived formats that are poorly executed necessarily lead to kludgey code. I think it is great that Sebastiano has created a clean-sheet implementation.
It's not exactly my expertise but I can (try to) help with review if needed.
@rougier Sure all help is welcome. I'm going to wait a couple of days to attract some more reviewers and then I'll open the formal review issue.
I'm not entirely sure what is going on here, but I think I like what is being proposed. A code review for open journal publication of software?
I have a number of Bruker files here dating back to the early '90s, but as Chris points out, the Bruker file format is rather like DICOM: it's a specification (effectively YAML, but older), and how it has been used to determine orientation and other things from the acqp file depends on the time, the ParaVision version, the author of the pulse sequence, and a number of other factors. So 100% coverage is going to be hard.
Also realise that orientation is not guaranteed if the pulse sequence author hasn't done things "right"; for example, the phase read direction might be reversed and acqp will not record this.
And then there is PV6...
Short version: let me know if I can help, although I'm not entirely sure what is being asked just yet.
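(For anyone unfamiliar with the format being discussed: ParaVision parameter files such as acqp are plain-text, JCAMP-DX-style key–value records. A minimal sketch of reading one might look like the following; the key names and sample text are purely illustrative, not taken from any real acquisition, and a real converter has to handle many more quirks than this.)

```python
import re

def parse_paravision(text):
    """Parse Bruker ParaVision (JCAMP-DX style) parameter text into a dict.

    Scalar entries look like '##$ACQ_dim=2'. Array entries look like
    '##$ACQ_size=( 2 )', with the actual values on the following line(s).
    Lines starting with '$$' are comments/timestamps and are skipped.
    """
    params = {}
    key = None
    values = []
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith('$$'):
            continue
        m = re.match(r'^##\$?([^=]+)=(.*)$', line)
        if m:
            if key is not None:
                params[key] = ' '.join(values).strip()
            key, first = m.group(1), m.group(2).strip()
            # '( ... )' announces an array whose values follow on later lines
            values = [] if first.startswith('(') else [first]
        elif key is not None:
            values.append(line)  # continuation line of an array parameter
    if key is not None:
        params[key] = ' '.join(values).strip()
    return params

sample = """\
##$ACQ_dim=2
##$ACQ_size=( 2 )
128 128
$$ a comment line
##$ACQ_slice_orient=Transverse
"""
print(parse_paravision(sample))
```

The hard part, as noted above, is not this parsing step but the heuristics needed afterwards, since which keys are present (and what they mean for orientation) varies with the ParaVision version and pulse sequence.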
@andrewjanke Hello! Yes, I agree with everything you said about the Bruker file format!
Of course there are two considerations: 1) in an ideal world, Bruker would provide the converter; 2) a single user may not have a wide enough dataset to cover all the options (usually a single user has the same acquisition protocol, the same ParaVision version, and the same lab expert).
I started writing the Bruker converter to convert data for my own project, acquired with both PV5.1 and PV6.0.1, and with the need to explore the outcome of several different pulse sequences for tests. After much headache, and much help from the experts around (Bernard Siow, @neurolabusc, @matthew-brett, @dzhoshkun, Willy Gsell and others), I realised that bruker2nifti had reached a reasonably wide coverage and could be of some help to other users ("if only I had had it when I started my project...", and the like). Conversely, other users with data acquired using scanner settings / Bruker versions / pulse sequences different from mine may help further improve its coverage and robustness.
Submitting the code to JOSS seemed the best way to have it peer-reviewed, while increasing the number of users and datasets and therefore the percentage of cases covered.
Please let @Kevin-Mattheus-Moerman know if you would be interested in taking a look at the code, testing it on your data, and raising issues where you find problems (short version: becoming a reviewer). Thanks!!
Many thanks to all those who have replied so far. We would also very much appreciate any feedback and suggestions about the source code itself.
Hi all, I think most of the people in this thread are better than me for this review, so I'll drop out. But thanks for the invite!
Best wishes to @sebastianof and this submission :)
@rougier do you mind if we open the official review issue with you as main reviewer? I'll continue to invite additional reviewers though.
Ok. I can try to review but my review will be on a technical level mostly.
@rougier That is fine; we can get started on the technical side first, and I will recruit others to help with other aspects shortly.
@whedon assign @rougier as reviewer
OK, the reviewer is @rougier
@whedon start review
You didn't say the magic word! Try this:
@whedon start review magic-word=bananas
@whedon start review magic-word=bananas
OK, I've started the review over in https://github.com/openjournals/joss-reviews/issues/354. Feel free to close this issue now!
Submitting author: @SebastianoF (Sebastiano Ferraris)
Repository: https://github.com/SebastianoF/bruker2nifti
Version: 1.0.0
Editor: @Kevin-Mattheus-Moerman
Reviewer: @rougier
Author instructions
Thanks for submitting your paper to JOSS @SebastianoF. The JOSS editor (shown at the top of this issue) will work with you on this issue to find a reviewer for your submission before creating the main review issue.
@SebastianoF if you have any suggestions for potential reviewers then please mention them here in this thread. In addition, this list of people have already agreed to review for JOSS and may be suitable for this submission.
Editor instructions
The JOSS submission bot @whedon is here to help you find and assign reviewers and start the main review. To find out what @whedon can do for you, just type:
@whedon commands