
[REVIEW]: APGG - A Modular C++ Framework for Asymmetric Public Goods Games #4944

Closed. editorialbot closed this issue 1 year ago.

editorialbot commented 1 year ago

Submitting author: @jhstaudacher (Jochen Staudacher)
Repository: https://github.com/APGG-Lab/APGG
Branch with paper.md (empty if default branch):
Version: v1.1.2
Editor: @Nikoleta-v3
Reviewers: @ieyjzhou, @mstimberg
Archive: 10.5281/zenodo.8334926

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/4558a91d3a77910764ab49cd2db1a84f"><img src="https://joss.theoj.org/papers/4558a91d3a77910764ab49cd2db1a84f/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/4558a91d3a77910764ab49cd2db1a84f/status.svg)](https://joss.theoj.org/papers/4558a91d3a77910764ab49cd2db1a84f)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@ieyjzhou & @mstimberg, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @Nikoleta-v3 know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Checklists

📝 Checklist for @mstimberg

editorialbot commented 1 year ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 1 year ago
Software report:

github.com/AlDanial/cloc v 1.88  T=0.06 s (1156.7 files/s, 68280.7 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C++                             29            417             95           1506
C/C++ Header                    30            157             11            644
Python                           5            155             28            437
XML                              2              0              0            337
Markdown                         2             42              0            131
TeX                              1              1              0            126
make                             1             18              2             25
-------------------------------------------------------------------------------
SUM:                            70            790            136           3206
-------------------------------------------------------------------------------

gitinspector failed to run statistical information for the repository
editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1162/evco.1996.4.2.113 is OK
- 10.1126/science.162.3859.1243 is OK
- 10.1007/s00355-012-0658-2 is OK
- 10.1038/s41598-020-79731-y is OK
- 10.1016/j.plrev.2016.08.015 is OK
- 10.7551/ecal_a_016 is OK
- 10.1038/415137a is OK
- 10.1088/1478-3975/12/4/046005 is OK
- 10.1007/s00265-006-0305-y is OK
- 10.1038/ismej.2017.69 is OK
- 10.13140/RG.2.2.27100.72322/2 is OK

MISSING DOIs

- None

INVALID DOIs

- None
editorialbot commented 1 year ago

Wordcount for paper.md is 2049

editorialbot commented 1 year ago

👉 📄 Download article proof · 📄 View article proof on GitHub 👈

Nikoleta-v3 commented 1 year ago

👋🏼 @jhstaudacher, @ieyjzhou, @mstimberg this is the review thread for the paper. All of our communications will happen here from now on.

As a reviewer, the first step is to create a checklist for your review by entering

@editorialbot generate my checklist at the top of a new comment in this thread.

These checklists contain the JOSS requirements ✅ As you go over the submission, please check any items that you feel have been satisfied. The first comment in this thread also contains links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention https://github.com/openjournals/joss-reviews/issues/4944 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for reviews to be completed within about 2-4 weeks. Please let me know if any of you require some more time. We can also use EditorialBot (our bot) to set automatic reminders if you know you'll be away for a known period of time.

Please feel free to ping me (@Nikoleta-v3) if you have any questions/concerns. 😄 🙋🏻

Nikoleta-v3 commented 1 year ago

@jhstaudacher, you might recall that sylvaticus, even though they could not review the submission, left some useful suggestions/comments on https://github.com/openjournals/joss-reviews/issues/4711#issuecomment-1319729427. I am not sure if you have addressed these, so just in case I am pasting them here:

Just a few comments/suggestions to the author of the package, jhstaudacher, if I can:

jhstaudacher commented 1 year ago

@Nikoleta-v3 Thank you very much for opening a review thread for our submission.

jhstaudacher commented 1 year ago

@ieyjzhou and @mstimberg Thank you both very much for your readiness to review our software APGG and our paper.

jhstaudacher commented 1 year ago

@Nikoleta-v3 Thank you very much for reminding us of the valuable suggestions and comments on our software APGG that sylvaticus provided three weeks ago. We will address these issues in the next release of our software APGG once the reviews are completed. Right now, I feel it is preferable not to change the code while the reviewers are assessing the software, so that we avoid confusion and misunderstanding. We will create a new release of APGG -- and revise our paper for JOSS -- addressing all suggestions from the reviewers (including the comments from sylvaticus) after we have the reviews. Thank you very much again for all your efforts.

Nikoleta-v3 commented 1 year ago

Sounds good to me 😄

@ieyjzhou & @mstimberg, please remember to create your checklist by commenting the following:

@editorialbot generate my checklist
mstimberg commented 1 year ago

Review checklist for @mstimberg

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

mstimberg commented 1 year ago

As a reminder: the topic is outside my field of expertise, so I cannot really judge, say, the appropriateness of the methods or the relevance to the field. I will therefore (mostly) confine my comments to the technical aspects.

I was able to successfully compile and run the software (on Linux). IMO, the C++ code is of good quality and rather well-structured and readable. I agree with the comments by sylvaticus that it would be good to use some API documentation tool and/or unit testing, but from my point of view this is not strictly necessary for the paper/software acceptance into JOSS. However, I feel that there needs to be better step-by-step documentation for how to verify the correct working of the software, e.g. by reproducing Figure 4 of the manuscript. I've opened this and a few other issues directly over in the APGG repository (APGG-Lab/APGG#2, APGG-Lab/APGG#3, APGG-Lab/APGG#4, APGG-Lab/APGG#5, APGG-Lab/APGG#6).

I also have a few comments on the manuscript; I'll include them directly in this comment:

Minor issues/comments:

the citations in paper.md should use e.g. [@mcginty2013public; @hintze2020inclusive] instead of [@mcginty2013public],[@hintze2020inclusive] to correctly display multiple citations. Also, try reformatting the reference in l. 130.

Figure 4: I wouldn't claim that a single successful replication proves that "APGG does work as intended".

l. 103: I think it should be made clearer that itIsNotReallyAGroupLevelPayOffCalculator (what a name!) is an example of an extension, and not part of APGG (in case I understood that correctly, of course).

ieyjzhou commented 1 year ago

I tested the code on a Windows 10 system using Visual Studio 2022. The code compiled and ran successfully. Even though the code is well-designed, it still takes a long time to read. More documentation is needed to help the reader understand the code. The maximum population should be explained in the paper; this will help readers set up their experiments, since software usually cannot support a very large population. In the flowchart of Figure 3, there seems to be no end point. The authors need to add the stopping conditions to the flowchart.

Nikoleta-v3 commented 1 year ago

Hey @jhstaudacher 👋🏻 did you have a chance to look over the reviewers' comments? 😄

jhstaudacher commented 1 year ago

Hey @Nikoleta-v3, thank you very much for your message from last Friday, December 16. Yes, I forwarded the very helpful review by Marcel Stimberg (mstimberg) from Friday, December 16, 17:34 (CET), as well as the very helpful comments by Yanjie Zhou (ieyjzhou) from Friday, December 16, 18:09 (CET), to my coauthors and we already discussed the feedback we received. We will be very happy to revise our software APGG and our paper submission accordingly. Still, we are not certain whether the first round of reviews has actually been completed or whether there will be some kind of formal "decision letter" from you as our editor. I am sorry if this question appears obtuse, but for my coauthors and me it is our first submission to JOSS. Thank you very much in advance for your reply and your efforts in editing our submission to JOSS.

Nikoleta-v3 commented 1 year ago

Hey @jhstaudacher 👋🏻

I am sorry if this question appears obtuse, but for my coauthors and me it is our first submission to JOSS.

Not a problem at all! 😊

In general you won’t receive any formal decision letter from me or from JOSS. Everything (reviews, acceptance, rejection) will happen on this issue.

The format that JOSS has for the reviews is that reviewers write their comments here, or open issues on the project’s repository, and the authors address them. I won’t intervene, except if I believe an intervention is needed. The reason for this format is to keep the discussion between you and the reviewers ongoing. This way you can have a real conversation with them.

Regarding mstimberg's review: From their checklist (https://github.com/openjournals/joss-reviews/issues/4944#issuecomment-1344428173) there are only a few points that have not been ticked. They have raised their concerns with these points by opening issues on https://github.com/APGG-Lab/APGG/issues. I would start addressing them now. Ideally, for each issue you can have a separate commit, or even a pull request if the issue requires a lot of work. If you have any questions, you can reply on the issue for clarification. Once you are done addressing an issue, comment on it to say so, ideally with a link to the commit/pull request.

Regarding ieyjzhou's review: I can see that they have also raised some issues. I would address them and comment here on how you resolved them (again with links to commits/pull requests). @ieyjzhou still has to generate their checklist.

Once both reviewers have completed their checklist we can move forward. Please let me know if everything makes sense or if you have any further questions!

jhstaudacher commented 1 year ago

Hey @Nikoleta-v3 Thank you very much for your detailed response to my questions. I already forwarded your answers to my coauthors. Currently, I do not have any further questions. Thank you very much again for your support.

jhstaudacher commented 1 year ago

@mstimberg Please allow me to take this chance to thank you very much for your very helpful review from Friday, December 9.

jhstaudacher commented 1 year ago

@ieyjzhou Please allow me to take this chance to thank you very much for your very helpful comments from Friday, December 9.

Nikoleta-v3 commented 1 year ago

Perfect! 😄

Nikoleta-v3 commented 1 year ago

Happy new year everyone 🥳 I hope you had a nice break.

Nikoleta-v3 commented 1 year ago

Hey @jhstaudacher 👋🏻 any updates on your side?

jhstaudacher commented 1 year ago

Hey @Nikoleta-v3, thank you very much for your message from January 13 and your New Year greetings from January 3. Please allow me to start my reply by reciprocating your good wishes for 2023. In terms of progress on our revision, we have already addressed four issues (APGG#3 to APGG#6). Our coauthor Falk Hübner (Venku628) replied to these four issues last Saturday, January 7, and closed them. There is still some work to be done before our resubmission -- in particular, restructuring the documentation for APGG and addressing the points brought up by sylvaticus and ieyjzhou. Of course, we will definitely not forget to address all the "Minor issues/comments" from the review by Marcel Stimberg (mstimberg), and we are also generating an improved version of Figure 4 in our paper. In short: my coauthors and I are halfway through, and we still need a bit more time to address all the points raised by the reviewers. Thank you very much again for all your efforts in editing our paper.

Nikoleta-v3 commented 1 year ago

Thank you very much @jhstaudacher for the detailed update! Looking forward to seeing the project after the new edits 😄

Nikoleta-v3 commented 1 year ago

Hey @jhstaudacher 👋🏻 Any new updates ?

jhstaudacher commented 1 year ago

Hey @Nikoleta-v3, thank you very much for your message from Friday, February 17. My coauthors and I are still working on restructuring the documentation (APGG#2) and on some seemingly smaller issues. Hopefully, we shall be able to resubmit next month. Thank you very much for your patience, and thank you very much again for all your efforts in editing our paper.

Nikoleta-v3 commented 1 year ago

Hey @jhstaudacher. I am pinging you because you and your co-authors have now been working on the revisions for quite a while.

The issue with taking a long time is that the reviewers' job becomes harder: they need to come back to the software and remind themselves of everything again.

I would appreciate it if you could finish with the revisions and reply to the reviewers’ comments in the next two weeks.

jhstaudacher commented 1 year ago

Hey @Nikoleta-v3, thank you very much for the nudge. We are speeding up our efforts.

jhstaudacher commented 1 year ago

Hey @Nikoleta-v3, my apologies for bothering you with a few questions before resubmitting:

a) Unless I am mistaken, we provide all our responses right here, i.e. we do not submit any PDF with our responses.
b) Unless I am mistaken, we are allowed to address the reviewers directly.
c) Is there any specific procedure (or any specific formal requirement) we need to follow and observe during resubmission?

Nikoleta-v3 commented 1 year ago

Hey @jhstaudacher, apologies for the delay.

a) Exactly. You can write your response here; ideally, refer to the comments and the issues that the reviewers left/opened.
b) Correct.
c) Nothing at all. Once the reviewers have confirmed that they are happy with the manuscript and have checked all the boxes in their checklists, I will have a final look at the submission and then we can proceed.

Nikoleta-v3 commented 1 year ago

You can always see the latest version of the paper by running:

Nikoleta-v3 commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

👉 📄 Download article proof · 📄 View article proof on GitHub 👈

jhstaudacher commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

👉 📄 Download article proof · 📄 View article proof on GitHub 👈

jhstaudacher commented 1 year ago

Hey @Nikoleta-v3, thank you very much for your very kind responses to my three questions from Thursday night. In particular, thank you very much for encouraging me to use the editorialbot for updating our paper; I had previously used http://whedon.theoj.org/ in order to avoid generating email messages and disturbing you. The version I just generated is the one I am now sending -- together with the final draft of the point-by-point responses to the reviewers -- to my coauthors for approval (or last changes). I shall resubmit as soon as I have the green light from my coauthors. Thank you very much for your patience and all your kindness. Have a wonderful weekend.

jhstaudacher commented 1 year ago

@Nikoleta-v3, thanks a lot for your patience and for editing our paper. With approval from my coauthors, I am hereby resubmitting.

RESPONSE TO THE REVIEWERS:

accompanying the resubmission of the paper

'APGG - A Modular C++ Framework for Asymmetric Public Goods Games'

by Mirko Rosenthal, David J. Richter, Falk Hübner, Jochen Staudacher and Arend Hintze

@mstimberg: Thank you very much again for your review of our paper and your helpful suggestions.

First, for the paper:

As I mentioned earlier, I am not from the field, but I do know the (in)famous "Tragedy of the commons" essay by Hardin. I am sure the authors are aware that the essay is highly problematic, to say the least. As e.g. Matto Mildenberger noted, "[Hardin] was a racist, eugenicist, nativist and Islamaphobe—plus his argument was wrong". I don't have a clear idea of what the best approach here would be, but I strongly feel that citing this paper/author without comment is not appropriate nowadays.

Thank you very much for making us aware of Mildenberger's essay. We rephrased and hope our new formulation is better.

Figure 4 is claimed to replicate Figure 6 from Hintze & Adami (2015), but I have trouble mapping the results between the two figures. Also, shouldn't the "percentile of agents" (rather "percentage"?) for "cooperators" and "punishers" add up to 100? The authors state that "For each datapoint along the axis of the synergy factor 10 replicate experiments were run", but do I understand correctly that the figure only shows the results from a single simulation? Wouldn't it make sense to use e.g. error bars or intervals to use all the information available? Finally, either the manuscript or the documentation should give clear instructions how to reproduce the figure.

We regenerated Figure 4 (including error bars). We are confident that we have addressed all your questions -- in particular, how to set parameters to reproduce the figure. As for generating the figure itself, we simply use "combineExperiments.py" and "plot.py" as specified in the "Getting started" section of our "README"; see also APGG-Lab/APGG#4.
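For orientation, the end-to-end pipeline can be sketched roughly as follows. This is a minimal sketch only: the script and file names come from this thread, but the exact command-line arguments and working directory are assumptions, not the project's verified interface.

import subprocess

# 1. Generate the batch configuration; per this thread, this writes
#    config.csv and setup.npy next to the script.
subprocess.run(["python", "buildConfig.py"], check=True)

# 2. Run the compiled APGG binary on the generated configuration
#    (whether the binary takes the config path as an argument is an assumption).
subprocess.run(["./apgg", "config.csv"], check=True)

# 3. Combine the replicate experiments (reads setup.npy) and plot the result.
subprocess.run(["python", "combineExperiments.py"], check=True)
subprocess.run(["python", "plot.py"], check=True)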

In l. 29/30, the references seem to be for the MABE framework. Is this just meant as an example of a modular framework? Or is APGG related to MABE in some way?

Thanks for pointing out this problem. We just meant to give an example of a modular framework. APGG and MABE are not related. We rephrased.

the citations in paper.md should use e.g. [@mcginty2013public; @hintze2020inclusive] instead of [@mcginty2013public],[@hintze2020inclusive] to correctly display multiple citations. Also, try reformatting the reference in l. 130.

Done.

Figure 4: I wouldn't claim that a single successful replication proves that "APGG does work as intended"

Thanks. We rephrased.

l. 103: I think it should be made clearer that itIsNotReallyAGroupLevelPayOffCalculator (what a name!) is an example of an extension, and not part of APGG (in case I understood that correctly, of course).

Our apologies for that glitch. We meant to speak about our GroupLevelPayoffCalculator.

Next, for our improvements on our code:

As for APGG-Lab/APGG#2, APGG-Lab/APGG#3, APGG-Lab/APGG#4, APGG-Lab/APGG#5 and APGG-Lab/APGG#6: we worked on all these issues and closed them, linking our changes and commit messages.

IMO, the C++ code is of good quality and rather well-structured and readable.

Thanks.

I agree with the comments by sylvaticus that it would be good to use some API documentation tool and/or unit testing, but from my point of view this is not strictly necessary for the paper/software acceptance into JOSS.

There is no doubt about the usefulness of unit tests in software development in general. As for our software APGG, it is about stochastic simulation, and thus unit tests are of rather limited value. Thank you very much for your statement that unit testing was "not strictly necessary for the paper/software acceptance into JOSS". We appreciate it.
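(For completeness: one common pattern for unit-testing stochastic simulations is to fix the random seed, so that a run becomes a deterministic regression test. Below is a minimal Python sketch with purely hypothetical names; it is not APGG code.)

import random

def run_simulation(seed, steps=100):
    # Hypothetical stand-in for one stochastic experiment;
    # all randomness is drawn from a single seeded generator.
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(steps))

def test_fixed_seed_is_reproducible():
    # With a fixed seed, two runs must agree exactly, so changes in the
    # simulation logic that alter results become detectable.
    assert run_simulation(seed=42) == run_simulation(seed=42)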

However, I feel that there needs to be better step-by-step documentation for how to verify the correct working of the software, e.g. by reproducing Figure 4 of the manuscript.

As stated above, we are confident that our new text in lines 130 to 140 together with the caption for Figure 4 will be very helpful to our users.

@ieyjzhou: Thank you very much again for your review of our paper and your helpful suggestions.

More documentation is needed to help the reader understand the code.

We improved our documentation, see in particular APGG-Lab/APGG#2, APGG-Lab/APGG#4 and APGG-Lab/APGG#5.

The maximum population should be explained in the paper; this will help readers set up their experiments, since software usually cannot support a very large population.

You are right; it was necessary to tell users in more detail how to run experiments. We made improvements in various places, e.g. via better documentation for config values (APGG-Lab/APGG#5) and specifications for our experiment in Figure 4 of our paper (in lines 130 to 140, together with the caption for Figure 4). However, we do not specify any maximum population (or maximum grid size) in our code, so maybe we are missing your point; thank you very much in advance for clarifying.

In the flowchart of Figure 3, there seems to be no end point. The authors need to add the stopping conditions to the flowchart.

Thank you very much for pointing this out. We generated an improved version of Figure 3 with the stopping conditions included in the flowchart.

Finally, let us not forget about the comments and suggestions by sylvaticus:

add a couple of sentences after the logo on what your program does;

Done.

create proper documentation (there is no link to the "our wiki" page referred to in the README);

Done.

you shouldn't need to register to download open source software; check the "release" method;

Done.

linked to the previous point, the code I saw has no comments. In C++ you have tools like Doxygen to help you structure the documentation that you write close to the code as comments and automatically extract the API (perhaps there are even better ones nowadays). Use them :-) ;

We improved the comments, but we refrain from adding them whenever it is clear from the name of a function what the function does. In our humble view, there is no added value in repeating a function name as a "short sentence comment". Rather than using Doxygen, we offer and prefer our (manually generated) wiki. We hope this is fine.

Nikoleta-v3 commented 1 year ago

👋🏻 @ieyjzhou & @mstimberg, hope both of you are well! Could you please see the author's response to your comments? 😄

mstimberg commented 1 year ago

@Nikoleta-v3 apologies for the radio silence. I did see the author's response, but it took me a while to look into it in detail. Below is my updated review:

@jhstaudacher, many thanks for the revision and the detailed reply. In my opinion, the paper and the software documentation have certainly improved. However, I think that a number of points from my previous review still have not been sufficiently addressed.

In particular, it is still unclear to me how to reproduce Figure 4:

We are confident that we have addressed all your questions -- in particular, how to set parameters to reproduce the figure. As for generating the figure itself, we simply use "combineExperiments.py" and "plot.py" as specified in the "Getting started" section of our "README"; see also APGG-Lab/APGG#4.

The "Getting started" section in the README/wiki states to 1) "Adjust the values in the main function of buildConfig.py to whatever values are suitable for your experiment […]" and 2) "Run the buildConfig.py to generate the config.csv." This seems to be very general ("your experiment") and not specific to the example shown in Figure 4. It looks as if the buildConfig.py file contains the settings to reproduce the values for Figure 4 by default, but this should be stated explicitly. If I run the unchanged buildConfig.py, it creates a config.csv file, but trying to run this file with the apgg binary does not work for me, it loads the experiments but then emits the warning [APGG Warning] You requested a non existing config key "timeToFolder". Using default: "" for each of them and does not generate any results. If I run the apgg binary with the config.csv value from the repository (i.e. without running buildConfig.py first), it will run the simulations successfully, but I cannot run combineExperiments.py with the results, since apparently this script also expects a setup.npy file. This file seems to be generated by buildConfig.py, but is not mentioned anywhere. As I stated before in APGG-LAB/APGG#4, it would be important to include at least some details about what these Python scripts do, and which dependencies they need.

Could you also comment a bit on how Figure 4 in this paper "replicated that exact result" from Hintze & Adami (2015)? Again, I'm not an expert in this domain, but I thought Figure 4 from the JOSS manuscript was meant to replicate Figure 6 from Hintze & Adami (2015), but this does not seem to be the case (P_C and P_P rise steeply at synergy 4 in Hintze & Adami (2015), while they rise less steeply starting at synergy ~3 in the JOSS manuscript).

Regarding the releases (APGG-Lab/APGG#3), the link in the README now points to GitHub's release section, but these releases are both very old (latest release from Oct 6, 2020), and do not contain any pre-compiled binaries. According to the wiki, building from source is optional, but I don't see any other option at the moment.

Finally, three minor issues:

The wiki on config values has broken/missing links for World (should it be World-Class?) and MatchupGenerator.

The two citations in l. 43 should be merged.

l. 99–107: this might just be an issue of formatting, but after "There are payoff calculators for different scenarios:" only a single payoff calculator (asymmetric payoff) is introduced. The GroupLevelPayoffCalculator (should it be AsymmetricPayoffCalculator as well?) is then described in a new paragraph.

jhstaudacher commented 1 year ago

@mstimberg Thank you so much for your feedback and your precious suggestions. I already forwarded the update of your review to my coauthors.

ieyjzhou commented 1 year ago

@Nikoleta-v3 All my comments have been responded to well. This paper can be accepted.

jhstaudacher commented 1 year ago

@ieyjzhou please allow me to thank you very much again for your readiness to review our paper and software, your valuable suggestions, as well as your recent approval of our paper.

Nikoleta-v3 commented 1 year ago

Thank you for your time, @ieyjzhou 🙏🏻

Nikoleta-v3 commented 1 year ago

@mstimberg thank you for thoroughly reviewing the author’s response! @jhstaudacher I would appreciate it if you could address the comments within the next two weeks.

jhstaudacher commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

👉 📄 Download article proof · 📄 View article proof on GitHub 👈

jhstaudacher commented 1 year ago

@Nikoleta-v3 thank you very much again for editing our paper. With approval from my coauthors, I am hereby resubmitting.

RESPONSE TO THE REVIEWER:

accompanying the resubmission of the paper

'APGG - A Modular C++ Framework for Asymmetric Public Goods Games'

by Mirko Rosenthal, David J. Richter, Falk Hübner, Jochen Staudacher and Arend Hintze

@mstimberg Thank you very much again for your second review of our paper from April 6 and your very helpful suggestions and comments.

In particular, it is still unclear to me how to reproduce Figure 4:

The "Getting started" section in the README/wiki states to 1) "Adjust the values in the main function of buildConfig.py to whatever values are suitable for your experiment […]" and 2) "Run the buildConfig.py to generate the config.csv." This seems to be very general ("your experiment") and not specific to the example shown in Figure 4. It looks as if the buildConfig.py file contains the settings to reproduce the values for Figure 4 by default, but this should be stated explicitly. If I run the unchanged buildConfig.py, it creates a config.csv file, but trying to run this file with the apgg binary does not work for me, it loads the experiments but then emits the warning [APGG Warning] You requested a non existing config key "timeToFolder". Using default: "" for each of them and does not generate any results. If I run the apgg binary with the config.csv value from the repository (i.e. without running buildConfig.py first), it will run the simulations successfully, but I cannot run combineExperiments.py with the results, since apparently this script also expects a setup.npy file. This file seems to be generated by buildConfig.py, but is not mentioned anywhere. As I stated before in APGG-Lab/APGG#4, it would be important to include at least some details about what these Python scripts do, and which dependencies they need.

Thank you very much for your valuable suggestions and observations. Our wiki now contains both a separate page "Plotting Data" (see https://github.com/APGG-Lab/APGG/wiki/Plotting-Data) and a separate page "Proof of Concept Replication" (see https://github.com/APGG-Lab/APGG/wiki/Proof-of-Concept-Replication) with detailed instructions on how to reproduce Figure 4 from our paper.

We also updated the following three pages in our wiki, reflecting your suggestions for improvement:

https://github.com/APGG-Lab/APGG/wiki/CSV-Batch-Experiments
https://github.com/APGG-Lab/APGG/wiki/Run-APGG
https://github.com/APGG-Lab/APGG/wiki/Getting-Started

We also fixed a problem in our code in the file "Archiver.cpp".

Could you also comment a bit on how Figure 4 in this paper "replicated that exact result" from Hintze & Adami (2015)? Again, I'm not an expert in this domain, but I thought Figure 4 from the JOSS manuscript was meant to replicate Figure 6 from Hintze & Adami (2015), but this does not seem to be the case (P_C and P_P rise steeply at synergy 4 in Hintze & Adami (2015), while they rise less steeply starting at synergy ~3 in the JOSS manuscript).

Thank you very much for being so observant. We are very grateful, since we should not have written that Figure 4 in our paper was an exact replication of the result from Hintze & Adami (2015). Instead, Figure 4 reproduces the same result qualitatively using a smaller number of replicates (the latter also allows for a meaningful visualization of the confidence intervals you suggested in your first review).
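(To illustrate the kind of aggregation behind such error bars, here is a minimal Python sketch with placeholder data; the data layout, labels, and values are assumptions and are not taken from the project's plot.py.)

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data layout: one row per replicate run, one column per
# synergy-factor value (10 replicates, as described in the paper).
rng = np.random.default_rng(0)
synergy = np.linspace(1.0, 6.0, 11)
replicates = rng.random((10, synergy.size))  # placeholder for real results

mean = replicates.mean(axis=0)
err = replicates.std(axis=0, ddof=1)  # spread across the 10 replicates

plt.errorbar(synergy, mean, yerr=err, capsize=3)
plt.xlabel("synergy factor")
plt.ylabel("fraction of agents")
plt.show()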

Regarding the releases (APGG-Lab/APGG#3), the link in the README now points to GitHub's release section, but these releases are both very old (latest release from Oct 6, 2020), and do not contain any pre-compiled binaries. According to the wiki, building from source is optional, but I don't see any other option at the moment.

We created a new release (1.1.1) of APGG including binaries on April 12.

Finally, let us address the three minor issues:

The wiki on config values has broken/missing links for World (should it be World-Class?) and MatchupGenerator.

Thank you very much. We fixed the links.

The two citations in l. 43 should be merged.

Done.

l. 99–107: this might just be an issue of formatting, but after "There are payoff calculators for different scenarios:" only a single payoff calculator (asymmetric payoff) is introduced. The GroupLevelPayoffCalculator (should it be AsymmetricPayoffCalculator as well?) is then described in a new paragraph.

Thank you very much for your observation. We adjusted the formatting.

jhstaudacher commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

👉 📄 Download article proof · 📄 View article proof on GitHub 👈