Status: Closed (editorialbot closed this thread 9 months ago)
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.13 s (788.0 files/s, 126413.9 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          47           2020           3800           7060
YAML                            28            336            273           1139
Markdown                         5            282              0            438
reStructuredText                15            242            449            240
TeX                              1             21              0            151
SVG                              3              2              0             89
Bourne Shell                     2             20             15             47
DOS Batch                       1              8              1             26
make                            1              4              7              9
TOML                            1              0              0              6
-------------------------------------------------------------------------------
SUM:                          104           2935           4545           9205
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1122
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- None
MISSING DOIs
- 10.1109/72.572108 may be a valid DOI for title: Supervised neural networks for the classification of structures
- 10.1109/tnn.2008.2010350 may be a valid DOI for title: Neural network for graphs: A contextual constructive approach
- 10.1109/msp.2017.2693418 may be a valid DOI for title: Geometric deep learning: going beyond Euclidean data
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@idoby, @sepandhaghighi – This is the review thread for the paper. All of our communications will happen here from now on.
Please read the "Reviewer instructions & questions" in the first comment above. Please create your checklist by typing:
@editorialbot generate my checklist
As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention https://github.com/openjournals/joss-reviews/issues/5713
so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for the review process to be completed within about 4-6 weeks but please make a start well ahead of this as JOSS reviews are by their nature iterative and any early feedback you may be able to provide to the author will be very helpful in meeting this schedule.
@sepandhaghighi – looks like there's a whitespace character at the start of that command to @editorialbot which is stopping it from responding (a bug we should fix).
Could you retry in a new comment, ensuring the sentence starts with @editorialbot.
:wave: @idoby & @sepandhaghighi – just checking in to see how you're getting on with your reviews? It looks like you've both made a start here, do you think you might be able to wrap up your initial reviews in the next week or so so that @diningphil can start responding?
> @idoby & @sepandhaghighi – just checking in to see how you're getting on with your reviews? It looks like you've both made a start here, do you think you might be able to wrap up your initial reviews in the next week or so so that @diningphil can start responding?
@arfon I will complete my review in the next few days.
> @idoby & @sepandhaghighi – just checking in to see how you're getting on with your reviews? It looks like you've both made a start here, do you think you might be able to wrap up your initial reviews in the next week or so so that @diningphil can start responding?
Thanks for the reminder, had forgotten about this. Will comment soon
@diningphil, thanks for submitting this package, it seems like a lot of thought and effort went into it!
A few comments: the development dependencies (`build`, `wheel`, `pytest`, `black`, etc.) are required for PyDGN's development workflow but not required to use PyDGN, so please consider separating them from the runtime dependencies.
Overall, I think this is very good work! I will be digging deeper into the software itself soon.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Dear @idoby,
First of all, thank you for the constructive and positive feedback. We were able to greatly simplify the installation process thanks to your comments. Below you can find our response so that we can continue the discussion.
PyDGN can now be installed with a single `pip install pydgn` command (please refer to README.md in the main branch). We also specified the dependencies in the `toml` file and removed the legacy `setup` files as suggested. Only the strictly required dependencies are now listed. Thank you very much for the help; the code now looks much cleaner.
As a last note, please note that example usage is shown in the README file, and in the aforementioned tutorial the user can find an explanation of the configuration files and how to use them to set up a proper experiment. Examples of configuration files can also be found in the `examples` folder.
Thank you again for the help. We are happy to discuss any more suggestions, if needed, related to the new version of the paper. I'd also like to tag @arfon to show that the discussion is ongoing =).
Best regards, the authors!
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@diningphil I think the intro reads much better and is better motivated now, and the differences between the various existing packages and yours are clearer. The practical use and motivation for the features and design decisions reads well.
Note that something did not render well on line 83 (it looks like a header failed to render) and that "PyDGN" is styled inconsistently on line 95.
Regarding the installation procedure: PyDGN should install much more easily now. Please consider not forcing the user to install `gpustat`, since not all systems have CUDA. The same applies to the `wandb` dependency: please consider detecting `wandb` at runtime rather than forcing users to install it if they don't use it.
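One common way to make a dependency like `wandb` optional is to probe for it at runtime instead of importing it unconditionally. A minimal sketch follows; the helper names here are hypothetical illustrations, not part of PyDGN's actual API:

```python
import importlib.util


def wandb_available() -> bool:
    # Returns True only if the optional wandb package can be imported,
    # without actually importing it (no side effects, no hard dependency).
    return importlib.util.find_spec("wandb") is not None


def maybe_init_wandb(**kwargs):
    # Hypothetical helper: start a wandb run only when wandb is installed;
    # otherwise silently skip experiment tracking.
    if not wandb_available():
        return None
    import wandb  # deferred import keeps wandb optional at install time
    return wandb.init(**kwargs)
```

With this pattern, `wandb` can be listed as an optional extra (e.g. `pip install pydgn[wandb]`) rather than a mandatory requirement.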
Besides that, I think we're good to go. Please consider updating the paper branch with the latest changes so that when the branch is archived upon acceptance, it will include the changes to the installation procedure and any other enhancements you made to the software.
Good job!
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/72.572108 is OK
- 10.1109/TNN.2008.2005605 is OK
- 10.1109/TNN.2008.2010350 is OK
- 10.1109/MSP.2017.2693418 is OK
- 10.1186/s40649-019-0069-y is OK
- 10.1016/j.neunet.2020.06.006 is OK
- 10.1109/TNNLS.2020.2978386 is OK
- 10.1109/TKDE.2020.2981333 is OK
- 10.1109/MCI.2020.3039072 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Hmm, this list doesn't seem to include all of the references in the paper...
Dear @idoby,
We are happy the changes are satisfactory. In order:
Thank you again for your careful review. We remain on stand-by for additional exchanges if required =).
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
What? Every ML conference paper is supposed to publish a DOI. For example, the DOI for Gilmer et al., "Neural message passing for quantum chemistry", is: https://dl.acm.org/doi/10.5555/3305381.3305512
edit: weird, seems like that DOI is broken...
I will double check again! For arXiv-only papers that will certainly not be possible, though.
Edit: The paper you mentioned was published at ICML 2017 on PMLR (http://proceedings.mlr.press/v70/gilmer17a), but I see no DOI when downloading the official BibTeX file.
Edit 2: The same goes for the NeurIPS proceedings (https://papers.nips.cc/paper_files/paper/2022) -- I cannot find a DOI in the official BibTeX of the published papers.
Maybe I'm looking in the wrong place, but I don't think I have ever seen a DOI for ICML or NeurIPS, just to mention two popular venues.
:wave: @sepandhaghighi – just checking in again here. It looks like you've made a good start on your review but haven't completed it yet. Do you think you might be able to get it done in the next couple of weeks?
> @sepandhaghighi – just checking in again here. It looks like you've made a good start on your review but haven't completed it yet. Do you think you might be able to get it done in the next couple of weeks?
Hi @arfon, sorry for my late response; I was a little preoccupied. I will be able to finish my review within the next 2 to 3 days.
SH
@arfon @diningphil
I apologize for my delayed reply. Excellent work!
Only two comments:
SH
Dear @sepandhaghighi,
Thank you so much for your review! We have implemented the following changes (in both paper and main branches):
If you are happy with our changes, I think we might ask @arfon to take the final decision.
Best regards, the authors
@diningphil What makes it Linux only?
> @diningphil What makes it Linux only?
Hi @idoby, PyDGN has never been tested on Windows, and the two commands to prepare datasets and launch experiments, `pydgn-dataset` and `pydgn-train`, assume a Linux/Unix distro.
Perhaps it would be best to write "tested on Linux/Unix only"? What do you think?
@diningphil I just ran your tests and both of the commands you mentioned (taken from the examples in the README file) on the latest macOS (Sonoma) with the latest Python etc. The tests pass with warnings coming from dependencies (pyg etc) and the dataset and train commands also seem to work, albeit training seems to be very slow.
If you feel your tests are comprehensive enough, you can claim that your package works on macOS too. I don't see anything in the code that assumes it is running on Linux specifically, or Windows either, for that matter, but I don't have a Windows system to test on.
@idoby wow, thank you so much for testing this yourself. We will try to test on Windows and see how it goes. If it works there too, we will remove the Ubuntu badge and claim that it should work on most systems. We will keep you posted.
Thank you again for the help!!
It's a best practice to set up a CI workflow to test every commit automatically on multiple operating systems and versions of Python. You should be able to find plenty of examples online, for example using GitHub Actions.
It's probably a good idea if you intend to continue developing this package.
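For illustration, a minimal multi-OS test matrix with GitHub Actions might look like the following sketch. The file name, Python versions, and install command are assumptions, not taken from the PyDGN repository:

```yaml
# .github/workflows/tests.yml (sketch)
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      # Install the package plus whatever test dependencies apply (e.g. pytest).
      - run: pip install . pytest
      - run: pytest
```

Each push and pull request then runs the test suite on all nine OS/Python combinations.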
Anyway, good luck! This paper can be published as far as I'm concerned.
I'll try to set it up directly with GitHub Actions as I did for Linux :)
EDIT: it was actually very simple, and the tests passed for windows-latest, ubuntu-latest, and macos-latest. So @sepandhaghighi I will also remove the Ubuntu badge, because it apparently runs on any system =)
> If you are happy with our changes, I think we might ask @arfon to take the final decision.
This is looking good. Thanks @diningphil! @sepandhaghighi – could I ask that you check off any remaining items in your review checklist (assuming you believe the author's response is sufficient)?
> @sepandhaghighi – could I ask that you check off any remaining review items in your checklist
@arfon Everything looks good to me.
> So @sepandhaghighi I will also remove the Ubuntu badge because it runs on any system apparently =)
@diningphil Good news!
Please consider the following:
1. You should update the `classifiers` section of `pyproject.toml`, replacing `Operating System :: POSIX :: Linux` with `Operating System :: OS Independent`.
2. Including the minimum required version of Python in your documentation is recommended.
SH
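For reference, the suggested change might look like the following `pyproject.toml` fragment. The minimum Python version shown is an assumption and should be replaced with the package's real floor:

```toml
[project]
# Document the minimum supported interpreter explicitly.
requires-python = ">=3.8"
classifiers = [
    # Replaces "Operating System :: POSIX :: Linux" now that the
    # tests pass on Linux, macOS, and Windows.
    "Operating System :: OS Independent",
]
```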
> Please consider the following:
> 1. You should update `pyproject.toml`, `classifiers` section and add `Operating System :: OS Independent` instead of `Operating System :: POSIX :: Linux`
> 2. Including the minimum required version of Python in your document is recommended.
Both are done! Thank you for spotting them!
@diningphil – looks like we're very close to being done here. I will circle back next week, but in the meantime, please give your own paper a final read to check for any potential typos etc.
After that, could you make a new release of this software that includes the changes resulting from this review? Then, please make an archive of the software on Zenodo/figshare/another service and update this thread with the DOI of the archive. For the Zenodo/figshare archive, please make sure that:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Dear @arfon,
Please let me know if anything else is required.
Many thanks!
@editorialbot set v1.5.0 as version
Done! version is now v1.5.0
@editorialbot set 10.5281/zenodo.8396373 as archive
Done! archive is now 10.5281/zenodo.8396373
@editorialbot recommend-accept
Attempting dry run of processing paper acceptance...
Submitting author: @diningphil (Federico Errica)
Repository: https://github.com/diningphil/PyDGN/
Branch with paper.md (empty if default branch): paper
Version: v1.5.0
Editor: @arfon
Reviewers: @idoby, @sepandhaghighi
Archive: 10.5281/zenodo.8396373
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@idoby & @sepandhaghighi, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @arfon know.
:sparkles: Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest :sparkles:
Checklists
Checklist for @idoby
Checklist for @sepandhaghighi