Closed — editorialbot closed this issue 11 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.05 s (543.1 files/s, 77300.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          14            304            918           1251
Jupyter Notebook                 1              0            653            287
Markdown                         3             55              0            185
TeX                              1             11              0            123
YAML                             3              2              4             90
TOML                             1              6              0             50
reStructuredText                 4             36             54             44
DOS Batch                        1              8              1             26
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            29            426           1637           2065
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/j.jmgm.2007.02.005 is OK
- 10.1021/jm049654z is OK
- 10.1002/9783527665143.ch10 is OK
- 10.1016/j.drudis.2011.02.011 is OK
- 10.1111/j.1476-5381.2010.01127.x is OK
- 10.1186/s13321-020-00444-5 is OK
- 10.1021/ci050296y is OK
- 10.1021/jm300687e is OK
- 10.1186/1758-2946-4-27 is OK
- 10.1016/j.eswa.2008.01.039 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Wordcount for paper.md is 1453
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@richardjgowers as a reviewer am I supposed to rerun the benchmarking notebook (as it would qualify as "original results") and make sure that the results are the same, or is making sure that the notebook is runnable enough? It's quite slow to run (but the notebook works).
@exs-cbouy yes if you could rerun the notebook in the repo and verify the results, i.e. a manual notebook regression test
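Since notebooks are plain JSON, a manual regression test like the one suggested above can also be partly scripted. The sketch below (my own illustration, not part of the OpenFEPOPS repo) compares the stored outputs of code cells between an old and a re-executed copy of a notebook:

```python
import json

def load_nb(path):
    """Load a .ipynb file as a plain dict (notebooks are JSON)."""
    with open(path) as fh:
        return json.load(fh)

def diff_notebook_outputs(nb_old, nb_new):
    """Return indices of code cells whose stored outputs differ
    between two parsed notebooks."""
    old_cells = [c for c in nb_old["cells"] if c["cell_type"] == "code"]
    new_cells = [c for c in nb_new["cells"] if c["cell_type"] == "code"]
    differing = []
    for i, (a, b) in enumerate(zip(old_cells, new_cells)):
        if a.get("outputs") != b.get("outputs"):
            differing.append(i)
    return differing
```

An empty result means the re-run reproduced the stored outputs cell for cell; note this is a strict equality check, so cells with timestamps or random seeds would need excluding.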
Hi @exs-cbouy, thank you very much for your in-depth review. We are working on addressing all points. As for rerunning the notebook, it has been refactored somewhat in addressing your comments. The most up-to-date version, on the development branch (not yet merged to main), would be the best to use, available here: https://github.com/JustinYKC/FEPOPS/blob/7b929a16dc1eadfa1f4abfd4e80a4c2a7982cad9/Explore_DUDE_diversity_set.ipynb
In addition, pre-generated FEPOPS database files for the DUDE diversity set are available here: https://doi.org/10.6084/m9.figshare.23951445.v3, which will massively speed up running the notebook. These new descriptors were generated after addressing your point on feature scaling before k-medoid clustering, and so differ from earlier versions.
Once we have addressed all changes on the development branch we will merge to main and notify all here.
Many thanks, Steve
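The feature-scaling point mentioned above can be illustrated in miniature: descriptor columns on very different scales will dominate distance-based clustering unless standardised first. A minimal sketch (my own illustration, not the OpenFEPOPS implementation):

```python
import numpy as np

def zscore(features):
    """Scale each feature column to zero mean and unit variance so that
    no single descriptor dominates distance-based (e.g. k-medoid)
    clustering."""
    features = np.asarray(features, dtype=float)
    std = features.std(axis=0)
    std[std == 0] = 1.0  # avoid division by zero for constant columns
    return (features - features.mean(axis=0)) / std
```

After scaling, Euclidean distances weight every descriptor comparably, which is typically a prerequisite for meaningful medoid selection.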
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @richardjgowers and @exs-cbouy, we have now addressed the raised issues and released a new version of OpenFEPOPS (v.1.8.0) which includes bugfixes and updates to functionality, online documentation and the manuscript. Many thanks, Steve
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thank you @stevenshave for this open-source version of FEPOPS and the benchmark with commonly used fingerprints, I believe this will be a useful 3D representation for a variety of use cases, as it becomes more widely available to the public.
@richardjgowers All my comments have been addressed and my checklist is complete, LGTM!
@hannahbaumann thanks for agreeing to review this JOSS submission. Are you able to get started on your review?
Thank you for your review, @hannahbaumann. We believe we have addressed all comments and fixed the installation issue in the new OpenFEPOPS 1.8.1 release, which is now on PyPI.
Best, Steve
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@richardjgowers All my comments have been addressed and my checklist is complete!
@editorialbot set v1.8.2 as version
Done! version is now v1.8.2
@editorialbot set <DOI here> as archive
@editorialbot set <version here> as version
@editorialbot generate pdf
@editorialbot check references
and ask author(s) to update as needed
@editorialbot recommend-accept
Hi @stevenshave we're onto the final checklist above. Can you make a zenodo/figshare release of 1.8.2 and post the DOI here?
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/j.jmgm.2007.02.005 is OK
- 10.1021/jm049654z is OK
- 10.1002/9783527665143.ch10 is OK
- 10.1016/j.drudis.2011.02.011 is OK
- 10.1111/j.1476-5381.2010.01127.x is OK
- 10.1186/s13321-020-00444-5 is OK
- 10.1021/ci050296y is OK
- 10.1021/jm300687e is OK
- 10.1186/1758-2946-4-27 is OK
- 10.1016/j.eswa.2008.01.039 is OK
- 10.1021/ci970431 is OK
- 10.1021/ci990307l is OK
- 10.1016/0040-4020(80)80168-2 is OK
- 10.1016/j.patrec.2005.10.010 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Hi @richardjgowers, I've just added the 1.8.2 source archive to figshare under the URL: https://doi.org/10.6084/m9.figshare.24477184.v1
and DOI: 10.6084/m9.figshare.24477184.v1
Many thanks to you and the reviewers for progressing this! Steve
@editorialbot set 10.6084/m9.figshare.24477184.v1 as archive
That doesn't look like a valid DOI value
@editorialbot pretty please set https://doi.org/10.6084/m9.figshare.24477184.v1 as archive
I'm sorry human, I don't understand that. You can see what commands I support by typing:
@editorialbot commands
@editorialbot set https://doi.org/10.6084/m9.figshare.24477184.v1 as archive
Done! archive is now 10.6084/m9.figshare.24477184.v1
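For authors who hit the same archive-command hiccup above: the bot ultimately stored the bare DOI even when given the full resolver URL. A rough sketch of the kind of DOI normalisation and pattern check involved (my own assumption about the logic, not editorialbot's actual implementation):

```python
import re

# Crossref-style DOI pattern: "10.", a 4-9 digit registrant code,
# a slash, then a suffix containing no whitespace.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def normalise_doi(value):
    """Strip common resolver prefixes, returning the bare DOI."""
    value = value.strip()
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if value.lower().startswith(prefix):
            value = value[len(prefix):]
    return value

def looks_like_doi(value):
    """Accept a bare DOI or one wrapped in a doi.org URL."""
    return bool(DOI_RE.match(normalise_doi(value)))
```

Normalising first explains why `https://doi.org/10.6084/m9.figshare.24477184.v1` was accepted and recorded as the bare `10.6084/m9.figshare.24477184.v1`.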
@editorialbot recommend-accept
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/j.jmgm.2007.02.005 is OK
- 10.1021/jm049654z is OK
- 10.1002/9783527665143.ch10 is OK
- 10.1016/j.drudis.2011.02.011 is OK
- 10.1111/j.1476-5381.2010.01127.x is OK
- 10.1186/s13321-020-00444-5 is OK
- 10.1021/ci050296y is OK
- 10.1021/jm300687e is OK
- 10.1186/1758-2946-4-27 is OK
- 10.1016/j.eswa.2008.01.039 is OK
- 10.1021/ci970431 is OK
- 10.1021/ci990307l is OK
- 10.1016/0040-4020(80)80168-2 is OK
- 10.1016/j.patrec.2005.10.010 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:wave: @openjournals/bcm-eics, this paper is ready to be accepted and published.
Check final proof :point_right::page_facing_up: Download article
If the paper PDF and the deposit XML files look good in https://github.com/openjournals/joss-papers/pull/4759, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept
@stevenshave as the AEiC on this track, I will now help to process the final steps. I have checked the repository, this review, the paper, and the archive link. Most seems in order; however, the points below require your attention:
On the paper:
- Both British English (e.g. `neighbouring`) and American English (e.g. `featurization`) spellings are used. Please correct to be consistent.
- `totalling`, which should perhaps be `totaling`.

Comments/recommendations (not required):
- Consider updating the `url` entry in your bib file, e.g. to: https://dl.acm.org/doi/abs/10.5555/1283383.1283494 (looks like their DOI does not resolve but this url does).

@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @Kevin-Mattheus-Moerman, many thanks for your work on this.
I have now standardised on British English, replacing featurization with featurisation. I looked into totalling vs. totaling; both seem acceptable, but I have gone with your suggestion of totaling. I also discovered another typo, "fingeprint", which has now been corrected.
After a brief discussion with coauthors, one would strongly prefer keeping the addresses as prescribed by our departments for use in publishing. I hope that it is OK if these remain as they are.
I have added the suggested URL to the Arthur 2007 reference, which now appears correctly.
My thanks to all reviewers and editors for the work you have all contributed to this, it has been a great experience!
Best, Steve
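The spelling standardisation described above lends itself to a quick scripted pass. A sketch with a hypothetical mapping (the two entries below are the ones mentioned in this thread; a real pass would use a fuller list):

```python
# Hypothetical American-to-British mapping plus the typo fix noted
# in the review; extend as further inconsistencies are found.
BRITISH = {
    "featurization": "featurisation",
    "fingeprint": "fingerprint",  # typo, not a spelling variant
}

def standardise(text):
    """Apply each replacement in turn to the manuscript text."""
    for us, uk in BRITISH.items():
        text = text.replace(us, uk)
    return text
```

Plain `str.replace` is crude (it ignores word boundaries and case), so the output still warrants a manual read-through.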
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@stevenshave thanks that all looks good now then.
@editorialbot accept
Doing it live! Attempting automated processing of paper acceptance...
Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.
If using GitHub, a "Cite this repository" menu will appear in the About section, containing both APA and BibTeX formats. When the repository is exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.
You can copy the contents for your CITATION.cff file here:
```
cff-version: "1.2.0"
authors:
  - family-names: Chen
    given-names: Yan-Kai
    orcid: "https://orcid.org/0000-0001-7161-9503"
  - family-names: Houston
    given-names: Douglas R.
    orcid: "https://orcid.org/0000-0002-3469-1546"
  - family-names: Auer
    given-names: Manfred
    orcid: "https://orcid.org/0000-0001-8920-3522"
  - family-names: Shave
    given-names: Steven
    orcid: "https://orcid.org/0000-0001-6996-3663"
contact:
  - family-names: Shave
    given-names: Steven
    orcid: "https://orcid.org/0000-0001-6996-3663"
doi: 10.6084/m9.figshare.24477184.v1
message: If you use this software, please cite our article in the Journal of Open Source Software.
preferred-citation:
  authors:
    - family-names: Chen
      given-names: Yan-Kai
      orcid: "https://orcid.org/0000-0001-7161-9503"
    - family-names: Houston
      given-names: Douglas R.
      orcid: "https://orcid.org/0000-0002-3469-1546"
    - family-names: Auer
      given-names: Manfred
      orcid: "https://orcid.org/0000-0001-8920-3522"
    - family-names: Shave
      given-names: Steven
      orcid: "https://orcid.org/0000-0001-6996-3663"
  date-published: 2023-11-09
  doi: 10.21105/joss.05763
  issn: 2475-9066
  issue: 91
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5763
  title: "OpenFEPOPS: A Python implementation of the FEPOPS molecular similarity technique"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05763"
  volume: 8
title: "OpenFEPOPS: A Python implementation of the FEPOPS molecular similarity technique"
```
If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.
🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘
🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨
Here's what you must now do:
Any issues? Notify your editorial technical team...
Submitting author: @stevenshave (Steven Shave)
Repository: https://github.com/JustinYKC/FEPOPS
Branch with paper.md (empty if default branch):
Version: v1.8.2
Editor: @richardjgowers
Reviewers: @hannahbaumann, @exs-cbouy
Archive: 10.6084/m9.figshare.24477184.v1
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@hannahbaumann & @exs-cbouy, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @richardjgowers know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @exs-cbouy
📝 Checklist for @hannahbaumann