openjournals / joss-reviews

Reviews for the Journal of Open Source Software
Creative Commons Zero v1.0 Universal

[REVIEW]: RSMTool: a Python Package for facilitating research on automated scoring models #33

Closed whedon closed 8 years ago

whedon commented 8 years ago

Submitting author: @desilinguist (Nitin Madnani) Repository: https://github.com/EducationalTestingService/rsmtool Version: v5.1.0 Editor: @arfon Reviewer: @jkahn
Archive: 10.5281/zenodo.58851

Status

status

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/fbc649c17d45074d92ac21084aaa6209"><img src="http://joss.theoj.org/papers/fbc649c17d45074d92ac21084aaa6209/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/fbc649c17d45074d92ac21084aaa6209/status.svg)](http://joss.theoj.org/papers/fbc649c17d45074d92ac21084aaa6209)

Reviewer questions

Conflict of interest

Paper PDF: 10.21105.joss.00033.pdf

arfon commented 8 years ago

/ cc @openjournals/joss-reviewers - would anyone be willing to review this submission?

If you would like to review this submission then please comment on this thread so that others know you're doing a review (so as not to duplicate effort). Something as simple as :hand: I am reviewing this will suffice.

Reviewer instructions

Any questions, please ask for help by commenting on this issue! 🚀

jkahn commented 8 years ago

maybe a tentative :hand: I am reviewing this? If nobody else speaks up this week, I will take it on to do a review next week (July 5-8).

Full disclosure: Nitin (the submitting author) and I are friendly (and see each other, mostly at conferences, about every 18 months).

I don't know anything about this specific work of his.

desilinguist commented 8 years ago

Thanks, Jeremy!

arfon commented 8 years ago

Full disclosure: Nitin (the submitting author) and I are friendly (and see each other, mostly at conferences, about every 18 months).

OK that sounds great @jkahn 👍

jkahn commented 8 years ago

General checks and references

All the outer formatting looks good.

It might be nice to find a DOI for Loukina 2015, but I can't figure out how to get DOIs from ACLweb.

jkahn commented 8 years ago

Statement of need

I'm not 100% clear what the statement of actual need is here. I get that this is a useful tool within the ETS, but I'm not certain about what the contribution of this package is, and who the researchers (outside of the ETS itself) would be that would find this to be a useful tool in hand.

I'm not saying that those researchers don't exist -- I might even be one of them -- but I don't think the statement of need clearly reflects what an imaginary researcher, Professor X, would use these tools for. Perhaps a tutorial walking through Professor X's thought process would help clarify what problem this solves; if the tutorial were part of the documentation, so much the better.

I suspect the package is meeting more than one need, and some researchers may have only a subset of those needs; that is okay, but those needs are not clearly stated. (See upcoming comment about multiply-situated work.)

jkahn commented 8 years ago

Undocumented entry points confuse new users

There are at least three different command-line tools (endpoints, in the Python jargon) that all seem to take the same argument structure (a config file) but presumably have different formats expected in the config files. There's exactly one example use, scraped from a Kaggle competition, but it only uses one of the CLI endpoints (rsmtool); the others (e.g. rsmeval and rsmcompare) don't have sample usages.

This doesn't help your statement of need much, either.
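The shared invocation pattern described above can be sketched as follows. The config keys shown are illustrative assumptions, not the package's verified schema; the actual required fields for each tool are defined in RSMTool's own documentation.

```python
import json

# Each endpoint (rsmtool, rsmeval, rsmcompare, ...) is invoked the same way,
# with a single config-file argument -- e.g. `rsmtool config.json` -- but
# each expects a different set of keys in that file.
# Hypothetical minimal configuration for the `rsmtool` endpoint:
rsmtool_config = {
    "experiment_id": "kaggle_demo",   # assumed key name
    "model": "LinearRegression",      # assumed key name
    "train_file": "train.csv",        # assumed key name
    "test_file": "test.csv",          # assumed key name
}

# Serialize it the way a user would before running `rsmtool config.json`.
config_json = json.dumps(rsmtool_config, indent=2)
```

Documenting one such minimal config per endpoint, next to a sample invocation, would go a long way toward resolving the asymmetry noted here.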

desilinguist commented 8 years ago

Please see the "available documentation" section in the main README. The config file formats for all four tools are fully documented.

There's only one example, that's true. That's because we expect the rsmtool endpoint to be the most commonly used. We can certainly make that clearer.

jkahn commented 8 years ago

Separating the pandas.DataFrame manipulator APIs from the input and output formats

As far as I can tell, the format of the input features is also undocumented. I don't have a clear picture of how I might (as an external developer) go about creating the datasets of "features" (a hugely overloaded term, further overloaded in the limited documentation provided here) applied to these competitions. Furthermore, I don't understand from the documentation what aspects of those files are being displayed in the resulting HTML generation.

I wonder if a clear difference between dataframe format and on-disk storage format would clarify things here. (I can imagine:

I'd thus like to see a separation among the following concerns, within the documentation, all of which seem to be at play here:
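One way to draw the distinction raised here, between the on-disk feature file and the in-memory object the manipulation APIs operate on, is sketched below. The column names are hypothetical; RSMTool's actual expected format is described in its doc/ directory.

```python
import io
import pandas as pd

# Hypothetical on-disk feature file: one row per response, one ID column,
# and one column per feature. The column names are illustrative assumptions.
csv_text = """\
spkitemid,grammar_errors,avg_word_length
resp_001,3,4.2
resp_002,0,5.1
"""

# In-memory representation: a pandas DataFrame, which is what the
# manipulation APIs would operate on regardless of the storage format.
features = pd.read_csv(io.StringIO(csv_text))

# Keeping the reader/writer separate from DataFrame-level logic means a new
# on-disk format only needs a new loader, not new analysis code.
feature_columns = [c for c in features.columns if c != "spkitemid"]
```

Documenting that boundary would also clarify which parts of these files end up in the generated HTML.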

jkahn commented 8 years ago

Okay, some more digging around in doc/ has resolved some of these (for example, the feature formats in doc/feature_file.md) but I still think there are too many control surfaces buried in the config file to get a clear picture of what the general uses are.

Is this a tool for new feature development? For comparing human raters? For comparing human raters to existing features? For comparing existing features to each other? For designing new notebooks that have lots of the existing work already done?

All of the above and more?

I think this is a configuration-based approach to desktop evaluation of how different schemes for combining numeric features improve (or hurt) the correlation with human scorers. As such, it's practically an IDE, which is why I am suggesting a clearer breakdown of the sub-responsibilities.
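That reading of the tool -- config-driven evaluation of how feature-combination schemes correlate with human scores -- can be sketched in a few lines. The feature values and combination schemes below are hypothetical, and the plain Pearson correlation stands in for whatever statistics the package actually reports.

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation, enough to illustrate the evaluation loop."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical machine feature values and human scores for four responses.
feature_a = [1.0, 2.0, 3.0, 4.0]
feature_b = [4.0, 1.0, 3.0, 2.0]
human = [1.0, 2.0, 2.5, 4.5]

# Two candidate combination schemes, as a config file might specify them.
schemes = {
    "a_only": list(feature_a),
    "mean_ab": [(a + b) / 2 for a, b in zip(feature_a, feature_b)],
}
results = {name: pearson(vals, human) for name, vals in schemes.items()}
```

Each of the sub-responsibilities (loading features, combining them, scoring against humans, rendering a report) is a separate concern hiding behind the single config file.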

jkahn commented 8 years ago

Miscellaneous

The authors make no particular performance claims, and the software runs in reasonable time (a few seconds) on the sample data and produces plausible-looking HTML documentation of correlations among users and features. I'm happy to check off the corresponding boxes there.

jkahn commented 8 years ago

Recommendation: accept - Minor revisions. I think it's clear that there's a docs problem here:

Separately, I have a few further quibbles that should not block publication but should probably be

aloukina commented 8 years ago

Thank you for such a detailed review, Jeremy! We'll go over your suggestions with Nitin.

desilinguist commented 8 years ago

Indeed. Thanks for the careful review, Jeremy!

arfon commented 8 years ago

Thank you for such a detailed review, Jeremy! We'll go over your suggestions with Nitin.

💯 - yes thanks for this excellent review @jkahn. @aloukina & @desilinguist - please let me know when you've had a chance to update your submission based upon @jkahn's feedback.

desilinguist commented 8 years ago

Hi @arfon and @jkahn,

Thanks again for the very useful review! It helped us come up with something a lot better, we think.

We have just released v5.1.0 of RSMTool that addresses the suggestions made in the review. Specifically:

  1. The documentation has been completely overhauled and is now hosted on readthedocs. It now includes a very clear statement of need, several tutorials, as well as browsable API documentation.
  2. (Almost all) expected warnings are now suppressed when running nosetests. One particular warning remains; it does indicate the use of about-to-be-deprecated code in a related package, for which I have filed an issue. This should be fixed in the next release.
  3. Several stylistic issues were fixed using pep8 and pyflakes, except for a couple that are documented in the PRs.
  4. Code coverage is now automatically computed via coveralls.io.
  5. I have started working on pip compatibility, and I think I can get it working in the next release once we update one of our dependencies, which has since become more wheel-friendly.
jkahn commented 8 years ago

Well, I am delighted to see this. @desilinguist and @aloukina, the new documentation is actually enjoyable to read, with well thought out hyperlinking and walkthroughs describing real user scenarios.

It's much less of a stretch for me to imagine a non-ETS researcher using this tool now, which was the unarticulated heart of my documentation objections before.

I'm glad to hear you're exploring pip/wheel installations, and I hope you'll publish wheels or sdist tarballs on PyPI periodically as part of your release cycle. I give this a :thumbsup: and defer to the editors as to when/if/how you should mint a new DOI.

arfon commented 8 years ago

I give this a 👍 and defer to the editors as to when/if/how you should mint a new DOI.

Excellent, thanks @jkahn.

@desilinguist - is there an associated DOI for the v5.1.0 release? If so, please add the DOI as a comment here. We can then move forward to accept and publish.

desilinguist commented 8 years ago

Hi @arfon, here is the DOI for the v5.1.0 release:

DOI

Please let me know if that's okay or if I need to do something else.

arfon commented 8 years ago

Perfect. Thanks!

@desilinguist - your paper is now accepted into JOSS and your DOI is http://dx.doi.org/10.21105/joss.00033 🎉 🚀 💥

Thanks for the great review @jkahn