Closed · whedon closed this issue 3 years ago
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @zoometh, @mschordan, @jonjoncardoso it looks like you're currently assigned to review this paper :tada:.
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, due to GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿
To fix this do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@whedon generate pdf
Wordcount for paper.md is 988
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1007/3-540-44585-4_8 is OK
- 10.1023/A:1027328830731 is OK
- 10.1137/0201010 is OK
- 10.1016/B978-0-12-417750-5.50022-1 is OK
- 10.1137/0216062 is OK
- 10.1007/978-3-540-77050-3_17 is OK
- 10.1016/0898-1221(81)90008-0 is OK
- 10.1016/0890-5401(90)90025-D is OK
MISSING DOIs
- None
INVALID DOIs
- None
Software report (experimental):
github.com/AlDanial/cloc v 1.88 T=0.08 s (719.2 files/s, 94295.7 lines/s)
| Language         | files | blank | comment | code |
|------------------|-------|-------|---------|------|
| Python           | 37    | 1236  | 1297    | 4136 |
| Markdown         | 2     | 89    | 0       | 289  |
| reStructuredText | 14    | 120   | 157     | 199  |
| TeX              | 1     | 7     | 0       | 82   |
| YAML             | 2     | 6     | 6       | 48   |
| DOS Batch        | 1     | 8     | 1       | 26   |
| make             | 1     | 4     | 7       | 9    |
| TOML             | 1     | 1     | 0       | 8    |
| **SUM**          | 59    | 1471  | 1468    | 4797 |
Statistical information for the repository 'd5f6644ece31ade8fd9a9522' was gathered on 2021/07/21. The following historical commit information, by author, was found:

| Author              | Commits | Insertions | Deletions | % of changes |
|---------------------|---------|------------|-----------|--------------|
| Francesco Andreuzzi | 9       | 597        | 331       | 1.76         |
| SV-97               | 3       | 880        | 453       | 2.52         |
| francescoandreuzzi  | 399     | 28541      | 22022     | 95.72        |

Below are the number of rows from each author that have survived and are still intact in the current revision:

| Author              | Rows | Stability | Age | % in comments |
|---------------------|------|-----------|-----|---------------|
| Francesco Andreuzzi | 6092 | 1020.4    | 3.6 | 11.46         |
| SV-97               | 601  | 68.3      | 0.1 | 0.00          |
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@whedon list reviewers
Here's the current list of reviewers: https://bit.ly/joss-reviewers
@whedon assign @zoometh as reviewer
OK, @zoometh is now a reviewer
@whedon add @jonjoncardoso as reviewer
OK, @jonjoncardoso is now a reviewer
@whedon add @zoometh as reviewer
OK, @zoometh is now a reviewer
@whedon add @mschordan as reviewer
OK, @mschordan is now a reviewer
:wave: @arfon : for some reason whedon isn’t adding @zoometh as an assigned reviewer for this submission even though they were added. Do you know why?
@mjsottile - please use @openjournals/dev for technical problems, and @openjournals/joss-eics for other things that need the editors-in-chief, in case one person (for example @arfon) is on vacation. I'll check on this now.
@whedon remove @zoometh as reviewer
OK, @zoometh is no longer a reviewer
@whedon add @zoometh as reviewer
OK, @zoometh is now a reviewer
Ok, this is now fine, as seen in the first comment in this issue, where it now says Reviewers: @jonjoncardoso, @mschordan, @zoometh.

When I started, zoometh was listed twice, since you had added them twice, so I removed them and then re-added them so they would only be listed once. The whedon command also shows this correctly in its response immediately above, but this is less important than what the first comment shows.

Now you have to manually edit the first comment and add a checklist for zoometh, as whedon only generates these checklists once, when the review issue is created, based on the reviewers that were assigned at the time (in the pre-review issue). If you need help with this, let @openjournals/joss-eics know.
@danielskatz thanks. When I looked at it last night, the reviewer did show up in the issue body and had a checklist, but was missing from the list of assignees for the GitHub issue itself. They appear to be listed as an assignee now.
@fAndreuzzi I'd suggest reordering the parts of your paper and modifying some of them:
I've also noted:
Hi @zoometh, thank you for your feedback.
I worked a little on the README (fixed the typo you mentioned and added a new section explaining in detail how to report a bug). In this section I also explain how to submit a pull request; is that enough for you?
I'm going to work on the paper to apply your suggestions, I'll update you here when I'm satisfied with the changes.
EDIT: I updated the documentation, in particular the sentence you mentioned which was clearly unintelligible.
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Following the suggestions by @zoometh I moved the "Statement of need" up and shortened the summary. Let me know what you think, and whether you have any other suggestions.
@fAndreuzzi I still think the structure of the paper can be improved: "BisPy is a Python package for the computation of maximum bisimulation" should close the Summary part, and a BisPy section should open after the Statement of need. For example, moving lines 18-19 ("It contains the implementation of the following algorithms:") through line 30 there would be appropriate.

:wave: @jonjoncardoso, please update us on how your review is going (this is an automated reminder).
:wave: @mschordan, please update us on how your review is going (this is an automated reminder).
:wave: @zoometh, please update us on how your review is going (this is an automated reminder).
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@zoometh In the last version I added the section BisPy and slightly modified the Statement of need accordingly. What do you think?
@whedon check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1007/3-540-44585-4_8 is OK
- 10.1023/A:1027328830731 is OK
- 10.1137/0201010 is OK
- 10.1016/B978-0-12-417750-5.50022-1 is OK
- 10.1137/0216062 is OK
- 10.1007/978-3-540-77050-3_17 is OK
- 10.1016/0898-1221(81)90008-0 is OK
- 10.1016/0890-5401(90)90025-D is OK
MISSING DOIs
- None
INVALID DOIs
- None
@fAndreuzzi All right, that's fine for me. Great job.
Hi! I haven't been able to properly start reviewing the paper and code yet, but I should get to it in the next few days :)
Hi @jonjoncardoso and @mschordan, since it has been a while since the last activity on this issue, I would like to know whether you encountered any issues while installing or using the package, or while gathering the information needed for the review. Please let me know if I can help in some way!
@fAndreuzzi The README.md file states that `compute_maximum_bisimulation(graph, initial_partition=[(0,7,10), (1,2,3,4,5,6,8,9,11,12,13,14)])` should produce `[(7, 10), (8, 9, 11, 12, 13, 14), (3, 4, 5, 6), (1, 2), (0,)]`, but I get `[(7, 10), (5, 6), (8, 9, 11, 12, 13, 14), (3, 4), (2,), (1,), (0,)]`. All preceding examples give the expected result.
Thank you very much @mschordan, the right output is indeed the one given by BisPy; I probably mixed up the outputs while copy-pasting from the terminal to the browser. I fixed README.md in my last commit.
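As a side note for readers of this thread, here is a minimal sketch of the kind of README call being discussed. It assumes, based only on the call quoted above, that `compute_maximum_bisimulation` is importable from the top-level `bispy` package and accepts a `networkx.DiGraph`; the small graph below is illustrative and is not the README's graph.

```python
# Illustrative sketch only: the small graph below is NOT the graph from the
# README example; the import path and the networkx.DiGraph input type are
# assumptions inferred from the call quoted above.
import networkx as nx
from bispy import compute_maximum_bisimulation

# A small directed graph: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3.
graph = nx.DiGraph()
graph.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3)])

# Trivial initial partition: every node starts in the same block, so the result
# is the partition induced by the maximum bisimulation of the whole graph.
print(compute_maximum_bisimulation(graph, initial_partition=[(0, 1, 2, 3)]))

# A finer initial partition constrains the result: only nodes placed in the
# same initial block can end up in the same output block.
print(compute_maximum_bisimulation(graph, initial_partition=[(0, 3), (1, 2)]))
```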
@fAndreuzzi I haven't gone through the code yet, but here are some general comments so far:
@jonjoncardoso
> some references are also needed to support this sentence about the problem complexity
Sure, I'm still not completely familiar with the rule "cite whenever you say something which is not obvious". I fixed the problem in one of the latest commits.
> The description of the bisimulation problem is clearer in the paper manuscript version
Could you please point me to the parts where bisimulation is explained better in the paper than in the documentation? In my opinion the explanation in the paper is very limited. Anyway, I improved the explanation on ReadTheDocs a little (cleaned up some sentences and added a new paragraph about maximum bisimulation). Kindly let me know what you think.
Thank you
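For readers following the thread, the standard notion being discussed can be stated as follows (a textbook definition, not quoted from the paper or the documentation):

```latex
% Standard definition (not quoted from the paper): a relation R on the nodes of
% a directed graph G = (V, E) is a bisimulation when related nodes mimic each
% other's outgoing edges. The maximum bisimulation is the union of all
% bisimulations; on a finite graph it is an equivalence relation, and the
% partition it induces is what compute_maximum_bisimulation returns.
\[
  \forall (a, b) \in R:\quad
    \bigl(\forall a'.\; a \to a' \Rightarrow \exists b'.\; b \to b' \wedge (a', b') \in R\bigr)
    \;\wedge\;
    \bigl(\forall b'.\; b \to b' \Rightarrow \exists a'.\; a \to a' \wedge (a', b') \in R\bigr)
\]
```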
:wave: @jonjoncardoso @mschordan Let me know if you need anything to continue making progress on your reviews. Don't forget to update your checklists at the top of this review issue so I can track review progress.
@fAndreuzzi 1) You write: "We found some sparse implementations of the most famous algorithm for the computation of the maximum bisimulation (Paige-Tarjan), however the source code lacked documentation and test cases, and did not seem to be intended for distribution."
This may sound strange to a reader. How did you find it, when it was not intended for distribution? Can you provide a reference? If not, it's not clear why this is mentioned in the first place.
2) You write: "Other algorithms to compute the maximum bisimulation are available, but for what we could see they are slight variations of one of those mentioned above, or tailored on particular cases."
Which algorithms are you referring to? What are "slight variations"? Any algorithms that are referred to should be cited properly.
3) You write: "Our implementations have been tested and documented deeply;" - That's indeed an important aspect, as you also mention that "our package may be useful to assess the performance". Some more information backing up this point would be interesting to a potential user. What was the biggest input that you tested the algorithm on? Can you provide some runtime measurements?
Hi @mschordan, sorry for the delay, but I was a bit busy this week.
> How did you find it, when it was not intended for distribution?
I found code snippets on GitHub, but as I mentioned in the paper they lacked documentation and test cases, and a brief examination of the code made me think that time complexity was not taken into account at all. When I say "not intended for distribution" I mean that those snippets are not meant to be included in someone else's code, or used for experiments whose intent goes beyond "what is the maximum bisimulation of this graph?". Maybe I should change "distribution" to something else in order to express the meaning better?
> Which algorithms are you referring to? What are "slight variations"? Any algorithms that are referred to should be cited properly.
If this sentence causes problems, I think I can remove it from the paper. I do not think it is interesting or meaningful to cite algorithms other than those that I implemented.
> What was the biggest input that you tested the algorithm on? Can you provide some runtime measurement?
Sure, do you think I should include the runtime measurements in README.md or in the paper?
@fAndreuzzi
> How did you find it, when it was not intended for distribution?
>
> I found code snippets on GitHub, but as I mentioned in the paper they lacked documentation and test cases, and a brief examination of the code made me think that time complexity was not taken into account at all. When I say "not intended for distribution" I mean that those snippets are not meant to be included in someone else's code, or used for experiments whose intent goes beyond "what is the maximum bisimulation of this graph?". Maybe I should change "distribution" to something else in order to express the meaning better?
If you want to mention those, then there should also be a link to the respective GitHub repo; otherwise it's not clear what you are referring to. Based on whether you want to provide a reference/link, you can decide whether to mention it at all.
> Which algorithms are you referring to? What are "slight variations"? Any algorithms that are referred to should be cited properly.
>
> If this sentence causes problems, I think I can remove it from the paper. I do not think it is interesting or meaningful to cite algorithms other than those that I implemented.
Indeed.
> What was the biggest input that you tested the algorithm on? Can you provide some runtime measurement?
>
> Sure, do you think I should include the runtime measurements in README.md or in the paper?
Where you mention that "our package may be useful to assess the performance", it would be good to also provide some numbers on the runtime for a given larger input (and the machine it was run on). This would be good to mention in the paper. Also, where you mention "tested deeply", it's better to provide some actual info, e.g. how many tests you provide, what your biggest test input was, how the tests were selected, etc.
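A minimal sketch of how such runtime numbers could be gathered, assuming (as in the README call quoted earlier in this thread) that `compute_maximum_bisimulation` is importable from `bispy` and accepts a `networkx.DiGraph`; the graph sizes are illustrative only and are not the authors' actual benchmark setup.

```python
# Hypothetical benchmark sketch, not the authors' actual measurement setup.
# Assumptions: bispy.compute_maximum_bisimulation exists as used earlier in
# this thread and accepts a networkx.DiGraph; sizes are illustrative only.
import time

import networkx as nx
from bispy import compute_maximum_bisimulation

for n_nodes in (1_000, 10_000, 100_000):
    # Sparse random directed graph with ~3 outgoing edges per node on average.
    graph = nx.gnp_random_graph(n_nodes, 3 / n_nodes, directed=True, seed=0)
    initial = [tuple(graph.nodes)]  # trivial partition: all nodes in one block
    start = time.perf_counter()
    compute_maximum_bisimulation(graph, initial_partition=initial)
    elapsed = time.perf_counter() - start
    print(f"{n_nodes} nodes, {graph.number_of_edges()} edges: {elapsed:.2f} s")
```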
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Submitting author: @fAndreuzzi (Francesco Andreuzzi)
Repository: https://github.com/fAndreuzzi/BisPy
Version: v0.2.3
Editor: @mjsottile
Reviewers: @jonjoncardoso, @mschordan, @zoometh
Archive: 10.5281/zenodo.5532910
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@zoometh & @mschordan & @jonjoncardoso, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @mjsottile know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Review checklist for @zoometh

- Conflict of interest
- Code of Conduct
- General checks
- Functionality
- Documentation
- Software paper

Review checklist for @mschordan

- Conflict of interest
- Code of Conduct
- General checks
- Functionality
- Documentation
- Software paper

Review checklist for @jonjoncardoso

- Conflict of interest
- Code of Conduct
- General checks
- Functionality
- Documentation
- Software paper