Thanks @hasan-sayeed for the update. @mkhorton and @ml-evs does this motivating example look good to you?
@sgbaird please let us know when you plan to address the remaining comments of the reviewers.
The example looks good but needs more explanation; for example, am I meant to be mentally plotting how novelty changes across the folds pasted above? I also couldn't see it mentioned in the docs or website when I last looked (and there have been no further commits).
Thanks @ml-evs. @sgbaird and @hasan-sayeed can you give us an estimate for when you will be able to work through the remaining issues?
Hello @sgbaird and @hasan-sayeed, I am the Associate Editor in Chief for the Physics and Engineering track. At the end of January, @phibeck had asked for a commitment to finish making changes to address the reviewer comments within four weeks. We're past that now, and there are still some open issues to fix.
At this point, we need to set a deadline to make the required changes within one week—otherwise, we will close the review issue. The reviewers have already invested a lot of time, and we want to respect that, but we also need to finish this in a reasonable amount of time.
Can you respond and commit to making the required changes within one week?
Hi all, I'm sorry for the trouble this has been. It wasn't my intention to draw this out so long. Between graduating, moving to a foreign country, a now three-month-old newborn, and taking a job where manuscripts and code contributions are not weighted directly as success metrics, it's been difficult to give the submission and those involved due justice in a reasonable timeframe. Things are starting to settle for me, but the timeline hasn't been fair to the reviewers or editors. Sorry 😞
I've done my best to address all points from the reviewers. Please have a look, and let me know if there are additional changes that need to be made or if I've missed something.
Try the Sphinx `sphinx-prompt` plugin to make your examples easier to copy and paste
Added
I am actually having trouble trying the example on your docs home page; can you verify it's correct and current? For example, I do not see `MPTSMetrics10` available to import from the `__init__.py`, and I believe the example should look like:

Updated
Unfortunately, `mptm.evaluate_and_record(fold, gen_structures)` seems to return `None` when I ran this, after printing information on `ElementProperty` and `SiteStatsFingerprint`.
It records the results to an internal variable (`recorded_metrics`). If you think it would be better to also return them, I can update that. If so, would it be better to return the full dictionary or just the metrics associated with that fold? (A rough sketch of what that could look like follows the example below.) If it's fine as-is, I've updated the example with the `print` line directly after to make it clearer.
>>> from matbench_genmetrics.mp_time_split.utils.gen import DummyGenerator
>>> from matbench_genmetrics.core.metrics import MPTSMetrics10, MPTSMetrics100, MPTSMetrics1000, MPTSMetrics10000
>>> mptm = MPTSMetrics10(dummy=True)
>>> for fold in mptm.folds:
...     train_val_inputs = mptm.get_train_and_val_data(fold)
...     dg = DummyGenerator()
...     dg.fit(train_val_inputs)
...     gen_structures = dg.gen(n=mptm.num_gen)
...     mptm.evaluate_and_record(fold, gen_structures)
...
>>> print(mptm.recorded_metrics)
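For reference, if returning the metrics directly turned out to be preferable, the change might look roughly like the following sketch (hypothetical, not the current behavior; `_compute_metrics` is an illustrative placeholder rather than the package's actual internal method name):

```python
# Hypothetical sketch only -- not the current matbench-genmetrics behavior.
# evaluate_and_record could both record and return the per-fold metrics:
def evaluate_and_record(self, fold, gen_structures):
    metrics = self._compute_metrics(fold, gen_structures)  # illustrative helper name
    self.recorded_metrics[fold] = metrics  # keep the existing recording behavior
    return metrics  # additionally hand this fold's metrics back to the caller
```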
- Likewise, I struggled to install via `pip` in a clean conda environment: I managed to install it in the end, but a pure pip install does not seem possible on the machine I tested it on (macOS 13.1, Python 3.9).
I think this should be resolved now. I don't own a Mac, but it seems OK on Windows and Linux. One of the difficult packages in terms of dependencies was `pyxtal`, which I was only using lightly for testing, so I replaced it with some custom code.
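As a rough illustration of what such custom code could look like (a sketch using pymatgen directly, not the actual test code in the repo), the idea is just to produce a handful of simple perturbed structures for testing:

```python
# Illustrative sketch only -- not the package's actual test code.
from pymatgen.core import Lattice, Structure

def dummy_test_structures(n: int = 10, a: float = 4.0) -> list:
    """Return n slightly perturbed rock-salt-like structures for testing."""
    structures = []
    for i in range(n):
        lattice = Lattice.cubic(a + 0.01 * i)  # vary the lattice parameter slightly
        s = Structure(lattice, ["Na", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]])
        s.perturb(0.05)  # small random displacement of each site (in angstroms)
        structures.append(s)
    return structures
```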
Given that MP does sometimes deprecate older materials/structures over time, is there a plan to ensure that benchmarks generated with older versions will be compatible with newer versions (or at least some way to alert the user if not)?
I've given this some thought, but I don't have a good solution in mind. I think alerting the user to any potential differences makes sense. The datasets are stored on FigShare, so that helps with reproducibility. Since the benchmarks may take a long time to run, it is likely better not to require people to rerun them frequently, but keeping the datasets up to date is also important.
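As a sketch of what alerting the user might look like (hypothetical; the snapshot constant, release string, and function below are illustrative and not part of the current API):

```python
# Hypothetical sketch -- not part of the current matbench-genmetrics API.
import warnings

SNAPSHOT_VERSION = "2022.10.28"  # illustrative: the MP release the benchmark snapshot was built from

def warn_if_snapshot_stale(current_mp_version: str) -> None:
    """Warn when the live Materials Project release differs from the benchmark snapshot."""
    if current_mp_version != SNAPSHOT_VERSION:
        warnings.warn(
            f"This benchmark was built from Materials Project release {SNAPSHOT_VERSION}, "
            f"but you are using release {current_mp_version}; some materials may have been "
            "deprecated or revised since the snapshot was taken."
        )
```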
Figure 1 caption docs links are not being rendered as links
Updated
2. Since the initial submission, the `matbench-discovery` package has also been released (with preprint). I think these two packages nicely complement each other, but I can see people getting confused (similar names, similar aims). Perhaps a sentence or two highlighting the differences would be helpful here?
Added to the README and now cited in manuscript
The example looks good but needs more explanation; for example, am I meant to be mentally plotting how novelty changes across the folds pasted above? I also couldn't see it mentioned in the docs or website when I last looked (and there have been no further commits).
Added explanations and a figure. Let me know if it looks OK or needs changes.
Regarding the example not working, I raised an issue a while back here
This should be resolved now.
Thank you to everyone for your patience and feedback.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thanks for the updates @sgbaird and for the extra legwork. I'm now happy to recommend this for acceptance and have completed my checklist above. The package is well-developed, well-documented, and well-situated in the context of the field. The MP time split example results are interesting and I would like to give them more thought, but I don't think that should hold up this paper if everyone else is happy.
Thank you @sgbaird for the update and for working through the remaining issues. We understand that life gets in the way sometimes, so we appreciate you taking the time to wrap things up. (Congratulations, too!)
Thank you very much @ml-evs for your thorough review! :tada: This means two out of three reviewers have recommended acceptance. @mkhorton could you let us know whether the recent updates look good to you?
@mkhorton could you please take a look and let us know what you think of the update, thank you!
I'm also happy that my questions have been responded to, thank you @sgbaird and @phibeck.
If it's fine as-is, I've updated the example with the `print` line directly after to make it clearer.
This is sufficient, it was just unclear from the example.
The datasets are stored on FigShare, so that helps with reproducibility.
This should be sufficient too. I think this is a larger community problem, so I don't want to hold up this specific review due to this issue.
Between graduating, moving to a foreign country, a now three-month-old newborn, and taking a job where manuscripts and code contributions are not weighted directly as success metrics, it's been difficult to give the submission and those involved due justice in a reasonable timeframe. Things are starting to settle for me, but the timeline hasn't been fair to the reviewers or editors.
No need to apologise @sgbaird, at least not to myself as reviewer, we're all trying our best! It's been difficult for me to find time to review too. Congratulations on the newborn :)
I will need to do a final re-try of the installation before I can sign off the last remaining items.
Okay, thanks for the update @mkhorton. Let us know when you get a chance for the re-try.
Hi @mkhorton have you had a chance to take a look yet?
Hi @mkhorton could you please let us know when you will be able to review the changes? We'd like to wrap this up as soon as possible. Thank you.
Thanks @phibeck, I'm checked off. I have pulled the latest version and have verified installation and functionality.
I do want to echo my previous comment, however: this package is a great start and has a robust statement of need, but I do think there need to be better metrics included, especially for validity; questions of novelty can be subtle too (for example, the case of ordered approximations). I have no doubt the authors will continue to improve the package over time, and I support its publication, but I hope additional metrics might be considered -- this might even be a good area for collaboration within the broader community.
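To make the kind of metric being suggested concrete, one simple example of an additional validity check (an illustrative sketch, not something currently in the package) could be a minimum-interatomic-distance filter on generated structures:

```python
# Illustrative sketch of an extra validity check -- not part of the package.
import numpy as np
from pymatgen.core import Structure

def has_physical_distances(structure: Structure, min_dist: float = 0.75) -> bool:
    """Flag structures with any pair of sites closer than min_dist (in angstroms)."""
    if len(structure) < 2:
        return True  # nothing to compare in a single-site cell
    dm = structure.distance_matrix  # pairwise distances under periodic boundary conditions
    off_diagonal = dm[~np.eye(len(structure), dtype=bool)]  # drop the zero self-distances
    return bool(off_diagonal.min() >= min_dist)
```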
Thank you @mkhorton for finishing your review and for your evaluation and recommendation for future development. Thank you, @ml-evs, @mkhorton, and @jamesrhester for your thorough reviews and for your patience!
@sgbaird the reviewers have recommended the submission for publication. There are a few more steps before we finalize the publication. At this point could you please:

- [ ] Make a tagged release of your software, and list the version tag of the archived version here.
- [ ] Archive the reviewed software in Zenodo or a similar service (e.g., figshare, an institutional repository).
- [ ] Check the archival deposit (e.g., in Zenodo) has the correct metadata. This includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it). You may also add the authors' ORCID.
- [ ] Please list the DOI of the archived version here.
I can then move forward with recommending acceptance of the submission.
@editorialbot set <DOI here> as archive
@editorialbot set <version here> as version
@editorialbot generate pdf
@editorialbot check references and ask author(s) to update as needed
@editorialbot recommend-accept
Thank you @mkhorton for finishing your review and for your evaluation and recommendation for future development. Thank you, @ml-evs, @mkhorton, and @jamesrhester for your thorough reviews and for your patience!
@sgbaird the reviewers have recommended the submission for publication. There are a few more steps before we finalize the publication. At this point could you please:
- [x] Make a tagged release of your software, and list the version tag of the archived version here.
I released v0.6.5 a bit ago, and there are no further commits so far.
- [x] Archive the reviewed software in Zenodo or a similar service (e.g., figshare, an institutional repository)
- [x] Check the archival deposit (e.g., in Zenodo) has the correct metadata. This includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it). You may also add the authors' ORCID.
- [x] Please list the DOI of the archived version here.
Let me know if you need anything else from my end!
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.26434/chemrxiv-2022-6l4pm is OK
- 10.1038/s41467-019-10030-5 is OK
- 10.21105/joss.04528 is OK
- 10.1021/acs.jcim.8b00839 is OK
- 10.1038/s43588-022-00349-3 is OK
- 10.1038/s41524-020-00406-3 is OK
- 10.1063/1.4812323 is OK
- 10.1016/j.commatsci.2012.10.028 is OK
- 10.1038/s41598-022-08413-8 is OK
- 10.3389/fphar.2020.565644 is OK
- 10.1016/j.matt.2021.11.032 is OK
- 10.1107/S2056989019016244 is OK
- 10.1038/s41586-019-1335-8 is OK
- 10.1002/advs.202100566 is OK
- 10.48550/arXiv.2306.11688 is OK
- 10.48550/arXiv.2308.14920 is OK
MISSING DOIs
- No DOI given, and none found for title: Scikit-Learn: Machine Learning in Python
- No DOI given, and none found for title: Crystal Diffusion Variational Autoencoder for Peri...
- No DOI given, and none found for title: Physics Guided Generative Adversarial Networks for...
INVALID DOIs
- None
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@sgbaird thanks! Here are a few more comments/suggestions for the manuscript. Please have a look when you have a moment.

- line 12: I would omit the square brackets since the citation seems part of the sentence
- line 27: the acronyms aren't used in the remainder of the manuscript, so it doesn't seem necessary to introduce them here
- line 57: perhaps you could introduce a linebreak in front of `matbench_genmetrics.core` since it exceeds the linewidth (not sure why this isn't done automatically..)
- line 165ff: it seems this reference has been published in the meantime, please update the reference accordingly (nature.com/articles/s41524-023-00987-9)

Since I cannot find a record of the last two co-authors' contribution in your repository, could you please state their contributions here for the record of review? You can check out the guidance for authorship here: https://joss.readthedocs.io/en/latest/submitting.html#authorship Thanks!
@editorialbot set v0.6.5 as version
Done! version is now v0.6.5
@editorialbot set 10.5281/zenodo.10840604 as archive
Done! archive is now 10.5281/zenodo.10840604
@sgbaird thanks! Here are a few more comments/suggestions for the manuscript. Please have a look when you have a moment.
- line 12: I would omit the square brackets since the citation seems part of the sentence
Done!
- line 27: the acronyms aren't used in the remainder of the manuscript, so it doesn't seem necessary to introduce them here
Agreed, removed
- line 57: perhaps you could introduce a linebreak in front of `matbench_genmetrics.core` since it exceeds the linewidth (not sure why this isn't done automatically..)
EDIT: Changed the wording to get the linebreak
- line 165ff: it seems this reference has been published in the meantime, please update the reference accordingly (nature.com/articles/s41524-023-00987-9)
Updated!
Since I cannot find a record of the last two co-authors' contribution in your repository, could you please state their contributions here for the record of review? You can check out the guidance for authorship here: joss.readthedocs.io/en/latest/submitting.html#authorship Thanks!
@JosephMontoya-TRI supplied code and an implementation related to the `mp-time-split` portion.
@sp8rks participated in the ideation and development / funding, vision, etc.
Had trouble getting the line break, so I updated the wording slightly to get it instead: https://github.com/sparks-baird/matbench-genmetrics/commit/d05ead2e72184227f79af6c8e912b6c5f342decf
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @sgbaird
EDIT: Changed the wording to get the linebreak
Looks good, thanks!
- line 165ff: it seems this reference has been published in the meantime, please update the reference accordingly (nature.com/articles/s41524-023-00987-9)
Updated!
It seems that the reference `zhao_physics_2023` didn't make it into the `.bib` file, could you please push this last change? Thanks!
@JosephMontoya-TRI supplied code and an implementation related to the `mp-time-split` portion. @sp8rks participated in the ideation and development / funding, vision, etc.
Okay, thank you for clarifying!
Sorry about that. Not sure what happened there. I added it in just now. Does it look ok?
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Sorry about that. Not sure what happened there. I added it in just now. Does it look ok?
No problem. Looks good now, thanks!
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1038/s41524-023-00987-9 is OK
- 10.26434/chemrxiv-2022-6l4pm is OK
- 10.1038/s41467-019-10030-5 is OK
- 10.21105/joss.04528 is OK
- 10.1021/acs.jcim.8b00839 is OK
- 10.1038/s43588-022-00349-3 is OK
- 10.1038/s41524-020-00406-3 is OK
- 10.1063/1.4812323 is OK
- 10.1016/j.commatsci.2012.10.028 is OK
- 10.1038/s41598-022-08413-8 is OK
- 10.3389/fphar.2020.565644 is OK
- 10.1016/j.matt.2021.11.032 is OK
- 10.1107/S2056989019016244 is OK
- 10.1038/s41586-019-1335-8 is OK
- 10.1002/advs.202100566 is OK
- 10.48550/arXiv.2306.11688 is OK
- 10.48550/arXiv.2308.14920 is OK
MISSING DOIs
- No DOI given, and none found for title: Scikit-Learn: Machine Learning in Python
- No DOI given, and none found for title: Crystal Diffusion Variational Autoencoder for Peri...
- No DOI given, and none found for title: Physics Guided Generative Adversarial Networks for...
INVALID DOIs
- None
@editorialbot recommend-accept
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1038/s41524-023-00987-9 is OK
- 10.26434/chemrxiv-2022-6l4pm is OK
- 10.1038/s41467-019-10030-5 is OK
- 10.21105/joss.04528 is OK
- 10.1021/acs.jcim.8b00839 is OK
- 10.1038/s43588-022-00349-3 is OK
- 10.1038/s41524-020-00406-3 is OK
- 10.1063/1.4812323 is OK
- 10.1016/j.commatsci.2012.10.028 is OK
- 10.1038/s41598-022-08413-8 is OK
- 10.3389/fphar.2020.565644 is OK
- 10.1016/j.matt.2021.11.032 is OK
- 10.1107/S2056989019016244 is OK
- 10.1038/s41586-019-1335-8 is OK
- 10.1002/advs.202100566 is OK
- 10.48550/arXiv.2306.11688 is OK
- 10.48550/arXiv.2308.14920 is OK
MISSING DOIs
- No DOI given, and none found for title: Scikit-Learn: Machine Learning in Python
- No DOI given, and none found for title: Crystal Diffusion Variational Autoencoder for Peri...
- No DOI given, and none found for title: Physics Guided Generative Adversarial Networks for...
INVALID DOIs
- None
:wave: @openjournals/bcm-eics, this paper is ready to be accepted and published.
Check final proof :point_right::page_facing_up: Download article
If the paper PDF and the deposit XML files look good in https://github.com/openjournals/joss-papers/pull/5342, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept
I guess the Alverson reference needs to be updated to the published version. Will try to address shortly - no worries if too late.
EDIT: added in https://github.com/sparks-baird/matbench-genmetrics/commit/8f7102d78cf03df66c2db16bcb29922fb7d01db1
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Okay, thanks!
:wave: @Kevin-Mattheus-Moerman this paper is ready for acceptance!
@sgbaird as AEiC for JOSS I will now help to process this submission for acceptance in JOSS. I have checked this review, your repository, the archive link, and the paper. Most seems in order, I only have the below points that require your attention:
- [x] Check spelling for `practioners`; this should perhaps be `practitioners`.
- [x] In your affiliations, please spell out `USA` as `United States of America`.
- [x] In the 3rd affiliation, please add the country (and you may remove the zip code if you like, this is not needed).
- [x] For the reference "JARVIS-Leaderboard: A Large Scale Benchmark of Materials Design Methods", you cite an arXiv link; if you think it is appropriate, you could instead refer to the published version, which, if I'm not mistaken, now appears here: doi.org/10.1038/s41524-024-01259-w
- [x] On the use of ChatGPT, can you please clarify how it was used here in more detail?
Hi, I think all of these are addressed now. Can you take a look?
@editorialbot generate pdf
Submitting author: @sgbaird (Sterling Baird)
Repository: https://github.com/sparks-baird/matbench-genmetrics
Branch with paper.md (empty if default branch):
Version: v0.6.5
Editor: @phibeck
Reviewers: @ml-evs, @mkhorton, @jamesrhester
Archive: 10.5281/zenodo.10840604
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@ml-evs & @mkhorton & @jamesrhester, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @phibeck know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @ml-evs
📝 Checklist for @jamesrhester
📝 Checklist for @mkhorton