openjournals / joss-reviews

Reviews for the Journal of Open Source Software
Creative Commons Zero v1.0 Universal

[REVIEW]: evoke: A Python package for evolutionary signalling games #6703

Open editorialbot opened 5 months ago

editorialbot commented 5 months ago

Submitting author: @stephenfmann (Stephen Francis Mann)
Repository: https://github.com/signalling-games-org/evoke
Branch with paper.md (empty if default branch):
Version: v0.1.3
Editor: @drvinceknight
Reviewers: @eowagner, @Nikoleta-v3, @igarizio
Archive: Pending

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/f261bbbeac4db87543d6c2a2dc2e20ba"><img src="https://joss.theoj.org/papers/f261bbbeac4db87543d6c2a2dc2e20ba/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/f261bbbeac4db87543d6c2a2dc2e20ba/status.svg)](https://joss.theoj.org/papers/f261bbbeac4db87543d6c2a2dc2e20ba)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@eowagner & @Nikoleta-v3, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @drvinceknight know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Checklists

📝 Checklist for @Nikoleta-v3

📝 Checklist for @igarizio

editorialbot commented 5 months ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 5 months ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1371/journal.pcbi.1003282 is OK
- 10.1093/acprof:oso/9780199580828.001.0001 is OK

MISSING DOIs

- None

INVALID DOIs

- None
editorialbot commented 5 months ago

Software report:

github.com/AlDanial/cloc v 1.90  T=0.06 s (574.3 files/s, 199562.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          17           1882           3551           3556
XML                              1              0            184           1513
Markdown                         2             25              0             70
TeX                              2              3              0             67
reStructuredText                 4             52             34             60
YAML                             2              5              4             33
DOS Batch                        2              8              1             27
TOML                             1              2              0             23
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            32           1981           3781           5358
-------------------------------------------------------------------------------

Commit count by author:

   250  Manolo Martínez
   124  Stephen Mann
editorialbot commented 5 months ago

Paper file info:

📄 Wordcount for paper.md is 383

✅ The paper includes a Statement of need section

editorialbot commented 5 months ago

License info:

🟡 License found: GNU General Public License v3.0 (Check here for OSI approval)

editorialbot commented 5 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

danielskatz commented 4 months ago

👋 @drvinceknight, @eowagner, @Nikoleta-v3 - it looks like this review started about a month ago, but nothing has happened since then. Is there anything I can do as track editor to help?

Nikoleta-v3 commented 4 months ago

Thank you for the nudge @danielskatz! I will start my review this week.

Nikoleta-v3 commented 4 months ago

Review checklist for @Nikoleta-v3

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

Nikoleta-v3 commented 3 months ago

Apologies for the delay in my review; the past few weeks have been very busy. I have now had time to go over the submission, and I have some minor comments as well as one major concern about the package. The concern regards the documentation of the package and its overall aim.

Reading through the documentation and the paper, it feels as though the aim of the package is to replicate two pieces of work: Skyrms 2010 and Martinez 2013. This is a very limited use for a package.

In the current documentation, there is no explanation of what signaling games are, and instructions on how to create your own signaling games appear only in a later section of the interactive tutorial. Let’s assume that the target audience consists of people familiar with signaling games and the evolutionary algorithms you are using. The first things shown in the tutorials and documentation are how to replicate the two prior works. However, even here no explanation is given; the functions are even named after figure numbers. For example, Skyrms2010_5_2() replicates figure 5.2. What if I have no idea what figure 5.2 of the paper is?

The package should serve as a tool for creating signaling games and running evolutionary simulations. The secondary aim of the package would be replicating current pieces of work. I am not saying that these should not be mentioned or part of the documentation; please see my suggestions in the Documentation section.

Let me break down my comments in the following sections:

Paper

I find the paper lacking because of what I have already mentioned. The scope of the package is too narrow. In the summary, it is even stated that “More examples from well-known books and papers will be added”.

The package should be about what your second sentence says: creating both reconstructions of models from the literature and their own models and results. It’s just that there is no documentation on how to do this.

There are several packages on game theory; for example, see https://egttools.readthedocs.io/en/latest/tutorials.html. However, there are no packages (to my knowledge) on signaling games, so I do believe there is a need for such a package. That being said, maybe consider citing some of these other packages? Another well-known Python package, for example, is https://nashpy.readthedocs.io/en/stable/#.

Documentation

Currently, there are the online material, the online document, and the examples folder, and all cover different things. I like the interactive documentation, but these sections should also be in the online documentation.

The online documentation should have sections that describe signaling games and how to create them, a description of the processes, and how to run them. Then follow with the plotting. Replications of prior literature can have their own sections, that is okay, but you also need to describe what those papers were doing, not just say "here you can get figure 1.1".

I know that in the evoke/examples scripts, the docstrings of the classes have the information, and they also appear in the examples: https://evoke.readthedocs.io/en/latest/examples.html. But it’s information that the user has to go look for.

I suggest that you focus on the Sphinx online document and then also have the interactive material that covers some functionality of the package.

Minor

Tests

Regarding your tests, I managed to run all of them and they passed. However, what are you testing? All the tests just check whether something is an instance. For example, consider the class OnePop: you could test that all the input arguments are passed correctly.

However, since tests are not required for a JOSS publication, this is a recommendation.
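To illustrate the suggestion, a constructor test can simply assert that each attribute matches what was passed in. This is only a sketch using a hypothetical stand-in class (`OnePopStub`), not `OnePop`'s real signature:

```python
import numpy as np

class OnePopStub:
    """Hypothetical stand-in for a one-population simulation class."""
    def __init__(self, game, playertypes):
        self.game = game
        self.playertypes = playertypes

def test_init_stores_arguments():
    game = "lewis_2x2"        # placeholder game object
    players = np.eye(2)       # placeholder strategy matrix
    sim = OnePopStub(game, players)
    # Each argument should be stored unchanged on the instance.
    assert sim.game == game
    assert np.array_equal(sim.playertypes, players)
```

A test runner such as pytest will collect any function named `test_*`, so one such function per constructor argument (or one covering all of them, as here) is usually enough.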

I realize I am requesting a lot of work. I believe that such a package is missing from the literature, but I find the package a bit weak in its current form.

Nikoleta-v3 commented 3 months ago

@stephenfmann 👋🏻 thank you for your patience! Please find my review here 🆙

stephenfmann commented 3 months ago

Thanks, that all sounds sensible. We were assuming quite a bit of background knowledge, and we will update the documentation and the paper so as not to assume so much.

Re: testing, this is something I don't have a lot of experience with, and so it's not clear to me what exactly to test other than that the object was created correctly (hence the checking for instances). When you say "you could test that all the input arguments are passed correctly", do you mean to create the object and then check that the properties of that object are equal to the values that were passed into it? I don't really have an intuition as to what makes a good test, but maybe I can read up on that.

Re: the figures, I'm concerned about copyright issues, so I don't want to actually depict the figure from the book. This is annoying, but it's a consequence of the fact that the Skyrms book is the classic text in this area, and so we chose to focus on it for the main example.

When you say: "The package should be about what your second sentence says: creating both reconstructions of models from the literature and their own models and results. It’s just that there is no documentation on how to do this." It's in the interactive tutorial, but I guess you mean it should be in the ReadTheDocs pages?

When you say: "Currently, there are the online material, the online document, and the examples folder, and all cover different things." The examples folder has to be more extensive than the other two, because it's where examples from the literature go. For the other two, if by "online material" you mean the interactive tutorial, it covers the basics whereas the documentation is supposed to cover everything in detail. Or does "the online document" mean the interactive tutorial? I'm a bit confused.

Anyway, I plan to action all your comments, so thank you very much for your efforts!

Nikoleta-v3 commented 3 months ago

Thank you for your reply and questions! Apologies that my comments were not always clear.

Question 1

Re: testing, this is something I don't have a lot of experience with, and so it's not clear to me what exactly to test other than that the object was created correctly (hence the checking for instances). When you say "you could test that all the input arguments are passed correctly", do you mean to create the object and then check that the properties of that object are equal to the values that were passed into it? I don't really have an intuition as to what makes a good test, but maybe I can read up on that.

That’s absolutely understandable. Here is a link to a tutorial, though it also covers very basic things: https://www.dataquest.io/blog/unit-tests-python/. The idea is that if your class has a method, then you can have one test function for each of the methods. So you would also test the init function by checking that the input parameters have been passed correctly. In this case, I realize it might not be too important.

But here is one other example: you are using replicator dynamics, so a test could check not only that the method doesn't crash, but also that the sum of the output vector is always close to 1.
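As a self-contained sketch of that idea (using a discrete-time replicator step written directly in NumPy, not Evoke's actual API), the normalisation test might look like this:

```python
import numpy as np

def replicator_step(pop, payoff):
    """One discrete-time replicator update: each strategy's share grows
    in proportion to its payoff relative to the population average."""
    fitness = payoff @ pop          # expected payoff of each strategy
    avg = pop @ fitness             # population-average payoff
    return pop * fitness / avg

def test_replicator_step_stays_normalised():
    rng = np.random.default_rng(0)
    payoff = rng.random((3, 3)) + 0.1   # strictly positive payoffs
    pop = np.array([0.2, 0.3, 0.5])
    for _ in range(100):
        pop = replicator_step(pop, payoff)
        # The state must remain a probability vector throughout.
        assert np.isclose(pop.sum(), 1.0)
        assert (pop >= 0).all()
```

The invariant (shares sum to 1 and stay non-negative) holds for the replicator dynamic regardless of the payoff matrix, which makes it a cheap property to assert on every update.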

Question 2

Re: the figures, I'm concerned about copyright issues, so I don't want to actually depict the figure from the book. This is annoying, but it's a consequence of the fact that the Skyrms book is the classic text in this area, and so we chose to focus on it for the main example.

I believe that it’s okay not to show the pictures of the figures. However, in the README, for example, you include this code and generate the following figure,

[Screenshot 2024-06-18 at 08 45 47: the README code snippet and the figure it produces]

but without any accompanying text that explains what the user is seeing. This information exists in the docstring of the function.

Question 3

When you say: "The package should be about what your second sentence says: creating both reconstructions of models from the literature and their own models and results. It’s just that there is no documentation on how to do this." It's in the interactive tutorial, but I guess you mean it should be in the ReadTheDocs pages?

Yes, I have seen the interactive tutorial. I guess I mean two things: yes, it should be on the ReadTheDocs pages, and I also think it should be the first information the reader sees. As a user, I would like to create my own game and run my own evolutionary simulation.

Question 4

When you say: "Currently, there are the online material, the online document, and the examples folder, and all cover different things." The examples folder has to be more extensive than the other two, because it's where examples from the literature go. For the other two, if by "online material" you mean the interactive tutorial, it covers the basics whereas the documentation is supposed to cover everything in detail. Or does "the online document" mean the interactive tutorial? I'm a bit confused.

My apologies if this is indeed confusing! There is the readthedocs documentation, the folder with the examples, and the interactive tutorial.

My suggestion is that the readthedocs documentation should be the most detailed. It should have sections on:

Then, have the interactive tutorial with a few examples of how you can use the package.

stephenfmann commented 3 months ago

That's super clear, thank you. I will action all of these comments. It might take a while as I'm unusually busy until late July. But I will get to it, so thanks for taking the time.

Nikoleta-v3 commented 3 months ago

Thank you! Good luck with the busy month 🤞🏻

stephenfmann commented 2 months ago

@editorialbot generate pdf

editorialbot commented 2 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

stephenfmann commented 2 months ago

@editorialbot generate pdf

editorialbot commented 2 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

stephenfmann commented 2 months ago

@editorialbot generate pdf

editorialbot commented 2 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

stephenfmann commented 2 months ago

Thanks @Nikoleta-v3 for your patience. We have now addressed all of the issues discussed above. I'll list the changes in response to the original review.

In the current documentation, there is no explanation of what signaling games are, and instructions on how to create your own signaling games appear only in a later section of the interactive tutorial. Let’s assume that the target audience consists of people familiar with signaling games and the evolutionary algorithms you are using. The first things shown in the tutorials and documentation are how to replicate the two prior works. However, even here no explanation is given; the functions are even named after figure numbers. For example, Skyrms2010_5_2() replicates figure 5.2. What if I have no idea what figure 5.2 of the paper is?

The documentation has been heavily revised. The Introduction describes both games in general and signalling games in particular, and details the components of a signalling game along with code snippets. The Usage page gives a full demonstration of creating and running a signalling game simulation, with code snippets taken from the interactive tutorial. Each time a figure appears, its meaning is explained so that the reader understands what they are looking at.

The package should serve as a tool for creating signaling games and running evolutionary simulations. The secondary aim of the package would be replicating current pieces of work. I am not saying that these should not be mentioned or part of the documentation; please see my suggestions in the Documentation section.

The emphasis of the documentation, paper, tutorial and README are now on creating signalling games and running evolutionary simulations. In each of those four documents, information about creating and simulating signalling games comes before details of replicating existing figures.

I find the paper lacking because of what I have already mentioned. The scope of the package is too narrow. In the summary, it is even stated that “More examples from well-known books and papers will be added”. The package should be about what your second sentence says: creating both reconstructions of models from the literature and their own models and results. It’s just that there is no documentation on how to do this.

The paper now emphasises creating and simulating a user's own games, while still mentioning the possibility of recreating figures from the literature. As mentioned above, the documentation now goes into detail about how to create and simulate custom games.

There are several packages on game theory; for example, see https://egttools.readthedocs.io/en/latest/tutorials.html. However, there are no packages (to my knowledge) on signaling games, so I do believe there is a need for such a package. That being said, maybe consider citing some of these other packages? Another well-known Python package, for example, is https://nashpy.readthedocs.io/en/stable/#.

The paper now references both Nashpy and EGTtools, while emphasising that the focus on signalling games is what is novel about this package.

Currently, there are the online material, the online document, and the examples folder, and all cover different things. I like the interactive documentation, but these sections should also be in the online documentation.

All of the key information from the tutorial now appears in the documentation too.

The online documentation should have sections that describe signaling games and how to create them, a description of the processes, and how to run them. Then follow with the plotting. Replications of prior literature can have their own sections, that is okay, but you also need to describe what those papers were doing, not just say "here you can get figure 1.1".

The documentation now extensively describes signalling games and how they are represented in Evoke. The figure example is described in detail, and a link to the relevant docstring is included.

I know that in the evoke/examples scripts, the docstrings of the classes have the information, and they also appear in the examples: https://evoke.readthedocs.io/en/latest/examples.html. But it’s information that the user has to go look for.

As mentioned, the docstring of the figure in the documentation is directly linked. The figures in the tutorial and README are now explained in detail (i.e. the information in the docstring is reproduced, though not necessarily verbatim).

I suggest that you focus on the Sphinx online document and then also have the interactive material that covers some functionality of the package.

We have followed this advice in the new version. The Sphinx documentation (hosted at ReadTheDocs) is now much more extensive.

Python versions: The package requires Python greater than 3.8. That is documented on PyPI; maybe include something on the repository too? Are you familiar with badges?

We have added the Python version requirement to the text of the README along with a link to pyproject.toml which lists other requirements. We have also added a badge to the top of the README. The next release will be compatible with Python 3.12; because the badge is linked to PyPI, it will update only when the new release is uploaded, which we plan to do after the review phase is completed.

Question: I see you have a license file, but it is called COPYING. How come? It’s the first time I've seen this name; it’s usually called LICENSE.

We have changed the filename to LICENSE. The reason it was previously named COPYING is that this is the name recommended by the Free Software Foundation.

Please add a contributing file. In the interactive material, you have “Pull request: When you've tested and everything looks good, create a pull request from your fork.” However, this cannot be found anywhere else, such as in the repository or the online documentation. Please include your contributing guidance in the README as well. Also, please consider having more detailed guidelines. See this example: https://gitlab.com/remram44/taguette/-/blob/master/CONTRIBUTING.md?ref_type=heads.

We have added a CONTRIBUTING.md (modelled on the linked file) and we link to it in the README.

Could you also include some details on how to run your tests? If you choose to have more detailed guidelines for contributing, as you can see in the example, a section on running tests is commonly included there.

In CONTRIBUTING.md we have added a section on running tests.

Regarding your tests, I managed to run all of them and they passed. However, what are you testing? All the tests just check whether something is an instance. For example, consider the class OnePop: you could test that all the input arguments are passed correctly. However, since tests are not required for a JOSS publication, this is a recommendation.

We have added much more detail to test_evolve.py and plan to update the other tests too (as you say, this is not a strict requirement so we hope it isn't a showstopper). In fact, in updating the tests we found two minor bugs, so thank you!

Thanks so much for your careful review. We hope we have addressed all the issues; if not we will take another look at anything that is still amiss.

Nikoleta-v3 commented 2 months ago

Thank you very much @stephenfmann! I am currently on holiday 🏖️ I will have a look next week.

Nikoleta-v3 commented 1 month ago

Hello @stephenfmann 👋🏻

Thank you very much for addressing my comments! Both the documentation and the paper have improved significantly.

We have added the Python version requirement to the README along with a link to pyproject.toml, which lists other requirements. We have also added a badge to the top of the README. The next release will be compatible with Python 3.12; because the badge is linked to PyPI, it will update only when the new release is uploaded, which we plan to do after the review phase is completed.

Amazing! And the badge looks great on the README 😄

We have changed the filename to LICENSE. The reason it was previously named COPYING is that this is the name recommended by the Free Software Foundation.

Interesting! I hadn’t seen that before.

We have added a CONTRIBUTING.md (modeled on the linked file) and we link to it in the README.

The contributing file looks great, especially the "How to Run Tests" section! 👍🏻

We have added much more detail to test_evolve.py and plan to update the other tests too (as you mentioned, this is not a strict requirement, so we hope it isn't a showstopper). In fact, while updating the tests, we found two minor bugs, so thank you!

Haha, I’m glad it was helpful!

Regarding the paper, I have 2️⃣ last minor comments. Apart from those, @drvinceknight, I’m happy to approve the paper for publication! 🚢

Comments:

- Line 17: should it be "game dynamics"?
- Lines 21-23: It sounds like there could have been more examples from books included, but they weren't. Maybe you could rephrase this to say that users can easily contribute to the library by adding examples from the literature, making it a useful educational tool, while at the same time creating and analyzing their own models, turning it into a great research resource.

stephenfmann commented 1 month ago

@editorialbot generate pdf

editorialbot commented 1 month ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

stephenfmann commented 1 month ago

Thanks so much @Nikoleta-v3! For the sentence "Evoke offers methods for both kinds of game dynamic", I think the singular "game dynamic" is correct here. We are saying that reinforcement learning is one dynamic and evolution is a different dynamic. It's a bit unusual, but people do sometimes use "dynamic" in the singular like this. (However, if you happen to know that far more game theorists would use "dynamics" in this context, I'll be happy to change it.)

For lines 21-23 I have rephrased these sentences. The text now reads: "Users can contribute to the library by adding further examples from the literature. This can be a useful way to become familiar with Evoke, while at the same time increasing the benefit to other users. Evoke can therefore serve as an educational tool (encouraging understanding of existing literature) and a research resource (promoting good practice and effective modelling techniques)."

The new paper is here. Hope this is satisfactory!

drvinceknight commented 1 month ago

@eowagner could you let me know how your review here is going? If there's anything I can help with let me know.

manolomartinez commented 3 weeks ago

In case Elliott is too busy for this, how should we move forward? Should we suggest names of other possible reviewers?

stephenfmann commented 1 week ago

Hi @danielskatz @drvinceknight how can we help you push this forward? Other potential reviewers from the JOSS list might be @marimeireles, @mwt and @igarizio.

marimeireles commented 1 week ago

Hi @stephenfmann I'm happy to help. Let me know if it's okay for me to review this.

stephenfmann commented 1 week ago

Thanks so much! @drvinceknight can we add @marimeireles as a reviewer?

danielskatz commented 1 week ago

👋 @drvinceknight - from the thumbs up above, perhaps @igarizio is another potential reviewer.

igarizio commented 1 week ago

Yes, happy to help too :)

drvinceknight commented 1 week ago

@editorialbot add @igarizio as reviewer

editorialbot commented 1 week ago

@igarizio added to the reviewers list!

igarizio commented 6 days ago

Review checklist for @igarizio

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper