/ cc @openjournals/joss-reviewers - would anyone be willing to review this submission?
If you would like to review this submission then please comment on this thread so that others know you're doing a review (so as not to duplicate effort). Something as simple as ":hand: I am reviewing this" will suffice.
Reviewer instructions
Any questions, please ask for help by commenting on this issue! 🚀
maybe a tentative :hand: I am reviewing this? If nobody else speaks up this week, I will take it on to do a review next week (July 5-8).
Full disclosure: Nitin (the submitting author) and I are friendly (and see each other, mostly at conferences, about every 18 months).
I don't know anything about this specific work of his.
Thanks, Jeremy!
OK that sounds great @jkahn 👍
All the outer formatting looks good.
Might be nice to find a DOI for Loukina 2015 but I can't figure out how to get DOIs from ACLweb.
I'm not 100% clear on what the statement of actual need is here. I get that this is a useful tool within ETS, but I'm not certain what the contribution of this package is, or who the researchers (outside of ETS itself) are that would find this a useful tool to have in hand.
I'm not saying that those researchers don't exist -- I might even be one of them -- but I don't think the statement of need clearly reflects what an imaginary researcher, Professor X, would use these tools for. Perhaps a tutorial walking through Professor X's thought process would help clarify what problem this solves -- and if the tutorial were part of the documentation, so much the better.
I suspect this package is meeting more than one need, and some researchers may have only a subset of those needs; that is okay, but those needs are not clearly laid out. (See the upcoming comment about multiply-situated work.)
Undocumented entry points confuse new users

There are at least three different command-line tools (entry points, in the Python jargon) that all seem to take the same argument structure (a config file) but presumably expect different formats in those config files. There's exactly one example use, scraped from a Kaggle competition, and it only exercises one of the CLI entry points (`rsmtool`); the others (e.g. `rsmeval` and `rsmcompare`) don't have sample usages.

This doesn't help your statement of need much, either.
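To make this concrete, here is the kind of end-to-end snippet I'd want the README to show for each entry point. Every config field, file name, and the exact command-line invocation below are my own guesses for illustration, not anything taken from the documentation:

```python
# Hypothetical use of the rsmtool entry point: write a config file, then run
# the CLI on it.  All field names, paths, and arguments here are invented.
import json
import subprocess

config = {
    "experiment_id": "kaggle_essays",    # invented
    "train_file": "train_features.csv",  # invented path
    "test_file": "test_features.csv",    # invented path
    "model": "LinearRegression",         # invented value
}

with open("rsmtool_config.json", "w") as config_file:
    json.dump(config, config_file, indent=2)

# Presumably something along these lines, with a directory for the HTML report:
subprocess.run(["rsmtool", "rsmtool_config.json", "output"], check=True)
```

A matching snippet for `rsmeval` and `rsmcompare`, showing how their config files differ, would go a long way.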
Please see the "available documentation" section in the main README. The config file formats for all four tools are fully documented.

There's only one example, that's true. That's because we expect the `rsmtool` entry point to be the most commonly used. We can certainly make that clearer.
Separate the `pandas.DataFrame` manipulator APIs from the input and output formats

As far as I can tell, the format of the input features is also undocumented. I don't have a clear picture of how I might (as an external developer) go about creating the datasets of "features" (a hugely overloaded term, further overloaded in the limited documentation provided here) applied to these competitions. Furthermore, I don't understand from the documentation what aspects of those files are being displayed in the resulting generated HTML.
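For illustration, this is the level of detail I'd like the docs to pin down for a feature file; the column names and layout below are invented, not taken from RSMTool:

```python
# A made-up "feature file": one row per response, an ID column, the human
# score, and one column per numeric feature.  None of these names come from
# the RSMTool documentation.
import io

import pandas as pd

feature_csv = io.StringIO(
    "response_id,human_score,grammar,fluency,vocab\n"
    "resp_001,3,0.82,0.41,0.67\n"
    "resp_002,4,0.91,0.77,0.80\n"
)

# On-disk format: a flat CSV.
features = pd.read_csv(feature_csv)

# In-memory format: a pandas.DataFrame indexed by response ID.
features = features.set_index("response_id")
print(features.head())
```

Knowing which of these columns feed the model, and which merely end up in the report, is exactly the distinction I'm missing.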
I wonder if a clear difference between dataframe format and on-disk storage format would clarify things here. (I can imagine: …)
I'd thus like to see a separation among the following concerns, within the documentation, all of which seem to be at play here:
Okay, some more digging around in `doc/` has resolved some of these (for example, the feature formats in `doc/feature_file.md`), but I still think there are too many control surfaces buried in the config file to get a clear picture of what the general uses are.
Is this a tool for new feature development? For comparing human raters? For comparing human raters to existing features? For comparing existing features to each other? For designing new notebooks that have lots of the existing work already done?
All of the above and more?
I think this is a configuration-based approach to desktop evaluation of how different schemes for combining numeric features improve (or hurt) the correlation with human scorers, but as such it's practically an IDE, which is why I am suggesting a clearer breakdown of the sub-responsibilities.
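As a toy illustration of the kind of comparison I mean (this is not RSMTool's API; the features, weights, and column names are all invented):

```python
# Compare how well two hand-rolled schemes for combining numeric features
# correlate with a human score, on synthetic data.
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
n = 200

scores = pd.DataFrame({"human_score": rng.randint(1, 5, size=n).astype(float)})
# Two noisy features loosely related to the human score.
scores["grammar"] = scores["human_score"] + rng.normal(0, 1.0, size=n)
scores["fluency"] = scores["human_score"] + rng.normal(0, 2.0, size=n)

schemes = {
    "equal_weights": 0.5 * scores["grammar"] + 0.5 * scores["fluency"],
    "grammar_only": scores["grammar"],
}
for name, predicted in schemes.items():
    r = predicted.corr(scores["human_score"])  # Pearson correlation
    print(name, round(r, 3))
```

My understanding is that RSMTool automates this loop (with much more care about rater reliability and reporting), which is why I'd like the documentation to break the responsibilities down explicitly.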
The authors make no particular performance claims, and the software runs in reasonable time (a few seconds) on the sample data and produces plausible-looking HTML documentation of correlations among users and features. I'm happy to check off the corresponding boxes there.
Recommendation: accept, minor revisions. I think it's clear that there's a docs problem here:

- API documentation (e.g. an `.. autodoc`-including `api.rst` file)

Separately, I have a few further quibbles that should not block publication but should probably be addressed:

- The `.py` files should probably have a `coverage` and a linter-style test run at least between version revisions.
- The package does not appear to be `pip`-installable (though a `requirements.txt` file is included). It'd be nice to have `sdist` tarballs available on PyPI -- this would even be compatible with conda installations, after all, and the code defined in this package does not include any non-Python code AFAICT. (Many of the declared dependencies require careful work with compilation, but that itself should be outside the scope of this project.) See the `setup.py` sketch just below this list.
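For the packaging quibble, something as small as the following `setup.py` sketch would make `python setup.py sdist` possible; the metadata, dependency list, and entry-point module paths are placeholders, not copied from the repository:

```python
# Placeholder setup.py sketch: enough to build an sdist tarball and expose the
# three command-line tools.  Names and versions below are illustrative only.
from setuptools import find_packages, setup

setup(
    name="rsmtool",
    version="5.1.0",
    description="Evaluation toolkit for automated scoring models",  # placeholder
    packages=find_packages(),
    install_requires=[
        # presumably mirrors requirements.txt; listed here only as examples
        "numpy",
        "pandas",
        "scikit-learn",
    ],
    extras_require={
        # dev tooling for the coverage/linter suggestion above
        "dev": ["coverage", "flake8"],
    },
    entry_points={
        "console_scripts": [
            # module paths are guesses
            "rsmtool = rsmtool.rsmtool:main",
            "rsmeval = rsmtool.rsmeval:main",
            "rsmcompare = rsmtool.rsmcompare:main",
        ],
    },
)
```

With something like that in place, `python setup.py sdist` followed by an upload to PyPI would cover the pip-installation case without touching the conda workflow.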
Thank you for such a detailed review, Jeremy! We'll go over your suggestions with Nitin.
Indeed. Thanks for the careful review, Jeremy!
💯 - yes thanks for this excellent review @jkahn. @aloukina & @desilinguist - please let me know when you've had a chance to update your submission based upon @jkahn's feedback.
Hi @arfon and @jkahn,
Thanks again for the very useful review! It helped us come up with something a lot better, we think.
We have just released v5.1.0 of RSMTool that addresses the suggestions made in the review. Specifically:

- `nosetests`: one particular warning remains, which actually does indicate the use of about-to-be-deprecated code in a related package, for which I have filed an issue. This should be fixed in the next release.
- `pip` compatibility: I think I have a way to get it working in the next release, when we update one of the packages, which has since become more wheel-friendly.

Well, I am delighted to see this. @desilinguist and @aloukina, the new documentation is actually enjoyable to read, with well thought out hyperlinking and walkthroughs describing real user scenarios.
It's much less of a stretch for me to imagine a non-ETS researcher using this tool now, which was the unarticulated heart of my documentation objections before.
I'm glad to hear you're exploring pip/wheel installations and I hope you'll publish wheels or `sdist` tarballs on PyPI periodically as part of your release cycle. I give this a :thumbsup: and defer to the editors as to when/if/how you should mint a new DOI.
Excellent, thanks @jkahn.
@desilinguist - is there an associated DOI for the v5.1.0 release? If so, please add the DOI as a comment here. We can then move forward to accept and publish.
Perfect. Thanks!
@desilinguist - your paper is now accepted into JOSS and your DOI is http://dx.doi.org/10.21105/joss.00033 🎉 🚀 💥
Thanks for the great review @jkahn
Submitting author: @desilinguist (Nitin Madnani)
Repository: https://github.com/EducationalTestingService/rsmtool
Version: v5.1.0
Editor: @arfon
Reviewer: @jkahn
Archive: 10.5281/zenodo.58851
Paper PDF: 10.21105.joss.00033.pdf