Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.12 s (547.1 files/s, 66752.1 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
Python 24 487 588 1709
Jupyter Notebook 13 11 3123 520
TeX 1 39 0 404
Markdown 7 129 0 382
YAML 14 39 9 337
reStructuredText 3 78 63 121
DOS Batch 1 8 1 27
JSON 1 0 0 26
Bourne Shell 1 8 5 18
TOML 1 5 0 18
make 1 4 6 10
-------------------------------------------------------------------------------
SUM: 67 808 3795 3572
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1450
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1098/rsta.2019.0054 is OK
- 10.1145/3328485 is OK
- 10.1109/ACCESS.2020.3047960 is OK
- 10.1109/ACCESS.2020.3031477 is OK
- 10.23919/MIPRO.2018.8400040 is OK
- 10.2139/ssrn.3503603 is OK
- 10.5281/zenodo.6303282 is OK
- 10.5281/zenodo.4724125 is OK
- 10.5281/zenodo.5012825 is OK
- 10.5281/zenodo.5061353 is OK
- 10.18653/v1/P17-1057 is OK
- 10.7275/extq-7546 is OK
MISSING DOIs
- 10.18653/v1/n16-3020 may be a valid DOI for title: "Why Should I Trust You?": Explaining the Predictions of Any Classifier
- 10.1007/978-3-642-33709-3_36 may be a valid DOI for title: Leafsnap: A Computer Vision System for Automatic Plant Species Identification
- 10.18653/v1/p19-1647 may be a valid DOI for title: Symbolic inductive bias for visually grounded learning of spoken language
- 10.18653/v1/k17-1037 may be a valid DOI for title: Encoding of phonology in a recurrent neural model of grounded speech
- 10.1613/jair.1.12967 may be a valid DOI for title: Visually grounded models of spoken language: A survey of datasets, architectures and evaluation techniques
INVALID DOIs
- https://doi.org/10.1016/j.jbusres.2020.09.009 is INVALID because of 'https://doi.org/' prefix
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@Athene-ai, @sara-02 how is the review going?
I am having family issues, so my review is proceeding slowly.
@Athene-ai thank you for informing us and take care.
@Athene-ai, @sara-02 how is the review going?
I should be able to turn in my review by this weekend :)
my review will be ready for next week
More comments:
Installation: Does installation proceed as outlined in the documentation? : The reviewer suggests improving this part of the paper to explain in more details the installation procedure to newbie
Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)? : The reviewer suggests to revise the way of writing this paper
Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)? : This part of the paper should be implemented
Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.: This part should be better explained
Functionality: This part should be better implemented and explained
Thank you @sara-02 and @Athene-ai. @elboyran, please let us know when you address the comments.
Dear @sara-02 and @Athene-ai thank you for your reviews.
@taless474 I see that some check-boxes are not ticked by either reviewer. @Athene-ai gave some comments, though they are very brief and generic, and I cannot see any comments from @sara-02.
How shall I proceed? May I contact the reviewers for more information and suggestions than what was given? For example, I am not sure how to approach "The reviewer suggests to revise the way of writing this paper"; it is not clear what should be changed.
Should all check-boxes be ticked for the paper to proceed?
@elboyran For me, the paper content is acceptable. I have an issue in that my automated tests failed; I will create an issue to follow up. The installation setup worked fine, but to test the functionality I need to have a model at hand. Is it possible to share a small MNIST-based model, or some pretrained model on Hugging Face, that can be used to independently test the functionalities of your system?
Dear @sara-02, the DIANNA README lists all datasets and models we have used in our project. Please have a look at https://github.com/dianna-ai/dianna#images-1 for a list of links to ONNX models trained on image datasets, including a 'binary MNIST' model.
Apart from Zenodo, all models used in our tutorials are also available in the DIANNA GitHub repo under tutorials/models.
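For anyone who wants to check one of those downloaded models independently before plugging it into DIANNA, here is a minimal sketch using onnxruntime; the local file name `mnist_model.onnx` and the 28x28 input shape are assumptions, so inspect the printed input metadata to confirm what the model actually expects.

```python
# Minimal sanity check of a downloaded ONNX model with onnxruntime.
# Assumptions: the model is saved locally as "mnist_model.onnx" and expects
# a batch of 28x28 grayscale images; check the printed input metadata.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("mnist_model.onnx")

# Inspect what the model expects and produces.
for inp in session.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)

# Run one dummy batch through the model to confirm it loads and predicts.
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 1, 28, 28).astype(np.float32)
scores = session.run(None, {input_name: dummy})[0]
print("prediction scores:", scores)
```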
Dear @sara-02, did it work for you now? Are more boxes in your review to be ticked?
Dear @Athene-ai ,
Could you, please, elaborate on your suggestions, especially for "The reviewer suggests to revise the way of writing this paper"?
Sure @elboyran
Here are some more comments:
Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?: The reviewer suggests to revise the way of writing this paper --> With this expression I mean that the authors should write the paper better.
Thank you @Athene-ai, but I am still unclear: in what respect should it be better? The (English) language? The length? The structure? Anything else?
English language
@Athene-ai, could you, please, give me example sentence(s) where the English of the paper is not good?
@elboyran the reviewer suggests improving the 'Used by' paragraph.
Last time I tried, I was facing issues running the tests. I have updated my OS since then; let me try again and revert ASAP.
@editorialbot assign @diehlpk as editor
:wave: folks – @diehlpk has kindly volunteered to step in as the handling editor here as @taless474 is not currently available to edit. Thanks Patrick!
Assigned! @diehlpk is now the editor
Hi @Athene-ai, @sara-02 how is your review going?
@diehlpk my review was completed
@sara-02 how is your review going?
@Athene-ai Can you please check your reviewer checklist? Not all boxes are checked.
I'm sorry human, I don't understand that. You can see what commands I support by typing:
@editorialbot commands
@diehlpk just filled it in
@elboyran I was able to pip install and successfully import the package in Python 3.8. A suggestion for improving things for first-time users: to train the MNIST model in https://github.com/dianna-ai/dianna-exploration/tree/main/example_data/model_generation/MNIST, can the dataset be loaded from the sklearn datasets themselves instead of from a path?
In line with the above ipynb and the Read the Docs documentation (maybe I am missing it), could there be one example that runs end-to-end, i.e. a single notebook containing both the model training and the explanation part, instead of training in one place and assuming the model is already available in the docs? This would help give newcomers a sense of the end-to-end workflow.
Can the dataset be loaded from the sklearn datasets instead of from a path? --> Loaded from sklearn.
Can there be one example that runs end-to-end? --> Perfect.
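To illustrate the suggestion, here is a rough sketch of pulling MNIST through scikit-learn's fetch_openml instead of reading it from a local path; the exact reshaping and normalisation the training notebook expects are assumptions here.

```python
# Sketch: load MNIST via scikit-learn instead of from a local path.
# fetch_openml downloads the dataset on first use and caches it locally.
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)

# Reshape to 28x28 images and scale to [0, 1]; whether the training notebook
# expects exactly this layout is an assumption.
X = X.reshape(-1, 28, 28).astype(np.float32) / 255.0
y = y.astype(np.int64)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10_000, random_state=0, stratify=y
)
print(X_train.shape, X_test.shape)
```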
@elboyran I was able to follow the testing setup mentioned here: https://dianna.readthedocs.io/en/latest/developer_info.html. My tests with tox (conda Python 3.8 env) worked as shown in the screenshot. However, running the pytest -v command from the same test folder gives an error.
@elboyran my pytest -v report
Dear @sara-02,
I'm one of the co-authors of the paper. Thank you both for the suggestions you made above. I created issues to address them (soon) and improve our repository.
I see one unticked box left "Functionality: Have the functional claims of the software been confirmed?" and I'm not clear how to interpret this. Did you read about specific functionality that you couldn't find in our software or were you just not able to try everything out yet?
Thanks a lot, Christiaan
@sara-02 Could you please have a look?
@cwmeijer @diehlpk I have checked the functionality box and was able to run the MNIST tutorial given in the docs.
All the work looks good from my side; we are good to publish.
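For readers following along, here is a rough sketch of the kind of call the MNIST tutorial walks through. The exact dianna.explain_image signature, keyword arguments, and expected input layout are assumptions based on the DIANNA README at the time; treat the tutorial notebook as the authoritative reference.

```python
# Rough sketch only: the explain_image signature and kwargs below are
# assumptions; see the DIANNA MNIST tutorial for the authoritative call.
import numpy as np
import dianna

# Stand-in for a real binary-MNIST digit (channels-last layout assumed).
image = np.random.rand(28, 28, 1).astype(np.float32)

# Ask DIANNA for a relevance map per requested label using the RISE method,
# pointing it at the ONNX model linked from the README.
heatmaps = dianna.explain_image(
    "mnist_model.onnx",
    image,
    method="RISE",
    labels=[0, 1],
)

print(np.asarray(heatmaps).shape)  # one heatmap per label
```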
@editorialbot check references
I'm sorry human, I don't understand that. You can see what commands I support by typing:
@editorialbot commands
@editorialbot generate pdf
@editorialbot check references
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1098/rsta.2019.0054 is OK
- 10.1145/3328485 is OK
- 10.1109/ACCESS.2020.3047960 is OK
- 10.1109/ACCESS.2020.3031477 is OK
- 10.23919/MIPRO.2018.8400040 is OK
- 10.2139/ssrn.3503603 is OK
- 10.5281/zenodo.6303282 is OK
- 10.5281/zenodo.4724125 is OK
- 10.5281/zenodo.5012825 is OK
- 10.5281/zenodo.5061353 is OK
- 10.18653/v1/P17-1057 is OK
- 10.7275/extq-7546 is OK
MISSING DOIs
- 10.18653/v1/n16-3020 may be a valid DOI for title: "Why Should I Trust You?": Explaining the Predictions of Any Classifier
- 10.1007/978-3-642-33709-3_36 may be a valid DOI for title: Leafsnap: A Computer Vision System for Automatic Plant Species Identification
- 10.18653/v1/p19-1647 may be a valid DOI for title: Symbolic inductive bias for visually grounded learning of spoken language
- 10.18653/v1/k17-1037 may be a valid DOI for title: Encoding of phonology in a recurrent neural model of grounded speech
- 10.1613/jair.1.12967 may be a valid DOI for title: Visually grounded models of spoken language: A survey of datasets, architectures and evaluation techniques
INVALID DOIs
- https://doi.org/10.1016/j.jbusres.2020.09.009 is INVALID because of 'https://doi.org/' prefix
@editorialbot commands
Submitting author: @elboyran (Elena Ranguelova)
Repository: https://github.com/dianna-ai/dianna
Branch with paper.md (empty if default branch):
Version: v0.4.2
Editor: @diehlpk
Reviewers: @Athene-ai, @sara-02
Archive: 10.5281/zenodo.7387004
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@Athene-ai & @sara-02, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @taless474 know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @sara-02
📝 Checklist for @Athene-ai