Open editorialbot opened 3 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper PDF after making changes in the paper's .md or .bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.90 T=0.08 s (1261.0 files/s, 292075.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
CSV                              5              0              0          11376
Python                          40            931           1409           3146
JSON                             3              1              0            767
reStructuredText                28            746           1002            639
Markdown                         4            118              0            351
TOML                             5             48              3            243
TeX                              1             13              0            148
YAML                             5             20             23            144
SVG                              5              0              0             98
Jupyter Notebook                 1              0           1789             68
DOS Batch                        1              8              1             26
make                             1              6              7             11
Dockerfile                       1              6              7              8
-------------------------------------------------------------------------------
SUM:                           100           1897           4241          17025
-------------------------------------------------------------------------------
Commit count by author:
95 qubixes
34 Erik-Jan van Kesteren
29 Samuwhale
26 Samuel
13 Raoul Schram
Paper file info:
📄 Wordcount for paper.md is 2230
✅ The paper includes a Statement of need section
License info:
✅ License found: MIT License (Valid open source OSI-approved license)
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.5281/zenodo.7697217 is OK
- 10.18637/jss.v059.i10 is OK
MISSING DOIs
- No DOI given, and none found for title: ONS methodology working paper series number 16—Syn...
- No DOI given, and none found for title: Differential privacy
- No DOI given, and none found for title: Statistical disclosure control
- No DOI given, and none found for title: k-anonymity: A model for protecting privacy
- No DOI given, and none found for title: Guidelines for Output Checking. Eurostat
- 10.29012/jpc.v1i2.570 may be a valid DOI for title: Differential privacy for statistics: What we know ...
- 10.1007/bf02985802 may be a valid DOI for title: The elements of statistical learning: data mining,...
- 10.1007/978-1-4612-0919-5_38 may be a valid DOI for title: Information theory and an extension of the maximum...
- 10.1002/wics.199 may be a valid DOI for title: The Bayesian information criterion: background, de...
- No DOI given, and none found for title: synthpop: Bespoke creation of synthetic data in R
- No DOI given, and none found for title: Simulation of synthetic complex data: The R packag...
- No DOI given, and none found for title: Datasynthesizer: Privacy-preserving synthetic data...
- No DOI given, and none found for title: To democratize research with sensitive data, we sh...
INVALID DOIs
- None
👋 @vankesteren, @PetrKorab, and @misken - This is the review thread for the paper. All of our communications will happen here from now on.
Please read the "Reviewer instructions & questions" in the first comment above.
Both reviewers have checklists at the top of this thread (in that first comment) with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention https://github.com/openjournals/joss-reviews/issues/7099 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for the review process to be completed within about 4-6 weeks, but please make a start well ahead of this: JOSS reviews are by their nature iterative, and any early feedback you can provide to the author will be very helpful in keeping to this schedule.
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
:wave: @vankesteren - please reduce the length of your paper to around 1000 words. Thanks!
Short version of the paper is on its way, will commit this week!
Paper has now been shortened!
@PetrKorab Thanks for spotting the broken link! I have fixed it now.
@vankesteren Please be less technical in the second part of the paper. Instead, add examples and use cases across disciplines. Thanks! (#321).
Thanks all for the progress on this review. Just a heads-up: I'll be on leave for the coming two weeks so I'll not be able to respond. Will do my best to pick this up soon after!
@vankesteren The test folder contains Python code and a test Titanic dataset. Please add well-documented automated tests (GitHub Actions or another CI setup) in line with the JOSS guidelines. Please also prepare tests for at least one other "real-world" dataset besides the Titanic one. Thanks! #326
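For illustration only, here is a minimal sketch of what such an automated test could look like. The metasyn API calls (`MetaFrame.fit_dataframe`, `synthesize`) and the data path are assumptions made for this sketch, not confirmed from the package or this thread:

```python
# Hedged sketch of a pytest-based check of metasyn's core workflow.
# The API names and the CSV path below are assumptions for illustration.
import polars as pl
import pytest

metasyn = pytest.importorskip("metasyn")


def test_synthesize_titanic_matches_schema():
    # Hypothetical location of the bundled Titanic demo dataset.
    real = pl.read_csv("tests/data/titanic.csv")

    # Fit distribution metadata on the real data, then generate synthetic rows.
    meta_frame = metasyn.MetaFrame.fit_dataframe(real)
    synthetic = meta_frame.synthesize(n=100)

    # The synthetic data should have the requested number of rows and the
    # same column names as the original.
    assert len(synthetic) == 100
    assert synthetic.columns == real.columns
```

A test along these lines can then run on every push via a standard GitHub Actions workflow that installs the package and invokes `pytest`.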
:wave: @vankesteren, @PetrKorab, and @misken - Just checking in to see how things are going with this review. Could you each post an update to where you are currently in this review process? Thanks!
Hey @crvernon thanks for asking;
So far, we have responded to several issues that have come out of this review (e.g., https://github.com/sodascience/metasyn/issues/317, https://github.com/sodascience/metasyn/issues/324), and we are working on changes based on the remaining issues. For example, we have included a medical example dataset in the package (https://github.com/sodascience/metasyn/pull/330) as a partial response to https://github.com/sodascience/metasyn/issues/321 and https://github.com/sodascience/metasyn/issues/326.
I have gotten into a bit of a time crunch with teaching, but I'm committed to updating the paper itself over the next few weeks to include more examples and fewer technical details, for which we can refer to the documentation. In the meantime, we are also updating the documentation slightly so that it flows better through these technical details.
In summary, I haven't responded much over the past weeks, but we have been hard at work! Let me know if you need anything else.
@vankesteren The medical dataset is fine, but as a reviewer it is difficult to evaluate Metasyn without well-documented GitHub Actions workflows or unit tests in another CI system. Please place everything into a single folder. Thanks!
@PetrKorab I have now left a comment on your main issue on this, here: https://github.com/sodascience/metasyn/issues/326#issuecomment-2478601871
We have also updated the paper, making it more legible and with better examples based on reviewer feedback, while keeping it as short as possible. The new paper is available here: https://github.com/sodascience/metasyn/blob/develop/docs/paper/paper.pdf
We are currently updating the documentation once more, and then we will release Metasyn version 1.1! I will ping this thread when we make the release; that would be a good moment for another "round" of reviews (where hopefully we will check all the boxes 😉)
@vankesteren Thanks for adding the tests, domain datasets and updating the paper!
@crvernon @misken @PetrKorab We have now released version 1.1! This is a good time to get back to the review process 😄
Submitting author: @vankesteren (Erik-Jan van Kesteren)
Repository: https://github.com/sodascience/metasyn
Branch with paper.md (empty if default branch):
Version: v1.0.2
Editor: @crvernon
Reviewers: @PetrKorab, @misken
Archive: Pending
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@PetrKorab & @misken, your review will be checklist-based. Each of you will have a separate checklist that you should update as you carry out your review. First of all, you need to run this command in a separate comment to create your checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @crvernon know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @misken
📝 Checklist for @PetrKorab