flizhou opened this issue 4 years ago
Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide
The package includes all the following forms of documentation:
- [ ] URL, BugReports and Maintainer (which may be autogenerated via Authors@R).

For packages co-submitting to JOSS
- [ ] The package has an obvious research application according to JOSS's definition
- [ ] The package contains a `paper.md` matching JOSS's requirements with:
- [ ] A short summary describing the high-level functionality of the software
- [ ] Authors: A list of authors with their affiliations
- [ ] A statement of need clearly stating problems the software is designed to solve and its target audience.
- [ ] References: with DOIs for all those that have one (e.g. papers, datasets, software).
Estimated hours spent reviewing:
Hey, guys! You've tackled a complicated topic in these 3 weeks, great job! I found your repository structure quite easy to understand, and the README is nicely done. The usage examples provide enough information to use the package with ease, and even just from looking at them I can understand what you're trying to achieve with this package. Great job with the test functions and erroneous-input handling!
Just as a disclaimer, one note about the dependencies in your package: I ran into issues with 'rlang', 'vctrs', and some other packages that you use in your project. I believe it is just a Windows OS issue, but heads up, because it can make life quite complicated for potential users of your package.
Some possible room for improvement:
- `confusion_matrix` could use some internal code cleanup in terms of styling and commenting (one possible way to automate part of this is sketched below).
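If it helps, and assuming you aren't already using them, the styler and lintr packages (both on CRAN) can take care of a lot of the mechanical styling work; roughly:

```r
# Not part of RMLViz, just a suggestion: run these from the package root.
# install.packages(c("styler", "lintr"))

styler::style_pkg()    # reformat all R sources to the tidyverse style guide
lintr::lint_package()  # flag remaining style issues and common problems
```

The commenting would still need to be done by hand, of course.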
Overall, great project! You've done a great job within these 3 weeks we had! Feel free to reach out if you need more information or feedback!
Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide
The package includes all the following forms of documentation:
- [ ] URL, BugReports and Maintainer (which may be autogenerated via Authors@R).

For packages co-submitting to JOSS
- [ ] The package has an obvious research application according to JOSS's definition
- [ ] The package contains a `paper.md` matching JOSS's requirements with:
- [ ] A short summary describing the high-level functionality of the software
- [ ] Authors: A list of authors with their affiliations
- [ ] A statement of need clearly stating problems the software is designed to solve and its target audience.
- [ ] References: with DOIs for all those that have one (e.g. papers, datasets, software).
Estimated hours spent reviewing:
Hey, dear development team! It's hard to believe you completed such a huge task in 3 weeks, but you did it! I followed the installation guide in your README and installed the package on my laptop without any problem. Your function descriptions are also well written, and I can easily see the objective of each function.
The examples are also easy to follow. Just by reading them, a user can roughly picture the circumstances under which the functions in this package might help speed up their data analysis.
This is very impressive, especially given that we have other intensive labs and lectures. I just have some minor suggestions after playing with the functions for a while and reading into the source code:
- The README of the repo, the landing page, and the vignette are duplicates of each other. It is good to show all available information to users, but when all three pages are identical, they become redundant.
- `model_comparison_table` lists "List of model, X_train, y_train, X_test, y_test, scoring option" as its input. This was misleading: when I tried to test the scoring option, I couldn't find any information about it. After eventually reading into the source code, I found that it takes `train_set_cf` and `test_set_cf` as its first two arguments. As for the dots argument, I only found code for parsing the model names, but no code for selecting scoring options.
- `confusion_matrix` has a similar issue: `Model`, `X_train`, and `X_test` are listed in the Input column but are not input arguments in the source code. The source code also contains a `labels` argument, but this argument is never used.
- `plot_roc` also has a misleading Input column: `model` and `X_valid` are not actual input arguments in the source code. In the R script, an input argument called `predict_proba` is required instead. (A rough sketch of the calls I ended up with is below.)
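To make the argument mismatch concrete, the calls I eventually got working looked roughly like the sketch below. The data frames, fitted models, and predicted probabilities are placeholders from my own test script, and I may well have missed other required arguments, so please treat this as my reading of the scripts rather than the documented API.

```r
library(RMLViz)

# model_comparison_table(): in the source, the first two arguments are the
# pre-split data frames, and the models go through `...` (only their names
# seem to get parsed). train_df, test_df, and the fitted models here are
# placeholders from my own test script.
model_comparison_table(train_set_cf = train_df,
                       test_set_cf  = test_df,
                       lr  = lr_fit,
                       svc = svc_fit)

# plot_roc(): the Input column mentions `model` and `X_valid`, but the R
# script asks for predicted probabilities via `predict_proba` instead
# (plus whatever other arguments I may have missed).
plot_roc(predict_proba = pred_probs)
```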
From the observations above, my recommendation would be: if the code was adapted from somewhere else, it might be better to include a link to the source. Of course, please feel free to let me know if I misunderstood your source code or the Input column.
Overall, this package is built around a very interesting idea of speeding up the process of data visualization, and it does what it's supposed to do. You should all be proud of yourselves! Feel free to contact me if you want to discuss more.
Thank you for your comments! Your reviews are very helpful.
Based on your comments, we have made the following changes:
Here is the link to our new release:
Name: RMLViz
Submitting Author: Fanli Zhou (@flizhou), Anas Muhammad (@anasm-17), Tao Huang (@taohuang-ubc), Mike Chen (@miketianchen)
Repository: https://github.com/UBC-MDS/RMLViz
Version submitted: 1.1.0
Editor: Varada Kolhatkar (@kvarada)
Reviewer 1: Polina Romanchenko (@PolinaRomanchenko)
Reviewer 2: Yuan-Lon Lu (@franklu2014)
Archive: TBD
Version accepted: TBD
Scope
Please indicate which category or categories from our package fit policies this package falls under: (Please check an appropriate box below. If you are unsure, we suggest you make a pre-submission inquiry.):
Explain how and why the package falls under these categories (briefly, 1-2 sentences):
Who is the target audience and what are scientific applications of this package?
Are there other R packages that accomplish the same thing? If so, how does yours differ or meet our criteria for best-in-category?
If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted.
Technical checks
Confirm each of the following by checking the box.
This package:
Publication options
JOSS Options
- [ ] The package has an **obvious research application** according to [JOSS's definition](https://joss.readthedocs.io/en/latest/submitting.html#submission-requirements).
- [ ] The package contains a `paper.md` matching [JOSS's requirements](https://joss.readthedocs.io/en/latest/submitting.html#what-should-my-paper-contain) with a high-level description in the package root or in `inst/`.
- [ ] The package is deposited in a long-term repository with the DOI:
- (*Do not submit your package separately to JOSS*)

MEE Options
- [ ] The package is novel and will be of interest to the broad readership of the journal.
- [ ] The manuscript describing the package is no longer than 3000 words.
- [ ] You intend to archive the code for the package in a long-term repository which meets the requirements of the journal (see [MEE's Policy on Publishing Code](http://besjournals.onlinelibrary.wiley.com/hub/journal/10.1111/(ISSN)2041-210X/journal-resources/policy-on-publishing-code.html))
- (*Scope: Do consider MEE's [Aims and Scope](http://besjournals.onlinelibrary.wiley.com/hub/journal/10.1111/(ISSN)2041-210X/aims-and-scope/read-full-aims-and-scope.html) for your manuscript. We make no guarantee that your manuscript will be within MEE scope.*)
- (*Although not required, we strongly recommend having a full manuscript prepared when you submit here.*)
- (*Please do not submit your package separately to Methods in Ecology and Evolution*)

Code of conduct