Closed: @whedon closed this issue 5 years ago.
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@whedon commands
Attempting PDF compilation. Reticulating splines etc...
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
Thanks, whedon, the proofs look good after changes I made.
@krueschan : I had a quick look at the paper and the repository, and it looks like this submission only covers the web interface of the project. Can you give us your assessment and justification for why the web-interface component alone can qualify as research software?
@labarba: Thanks for this excellent question. We tried to make this clear in the article, so please let us know if you think it still needs further clarification.
As we write in the article, cerc-web extends the functionality of the calculator "engine" quite substantially. In particular, it allows users to compare the levels of effort that would be implied by the effort-sharing regime based on the settings chosen by the user to reflect their ethical position, on the one hand, with the levels of effort implied by the climate action pledges that countries have actually made, on the other. This, in fact, is one of the main ways the calculator is used in research (about half of the example research pieces we list at the end have utilized this functionality), and it is not available in the "engine" component. (We also mention the API, which has been used in the past by independent research projects, now defunct, that arguably would not otherwise have been possible.)
Further, users of the calculator typically come from a policy research background and would, in our view, likely not feel comfortable with, or have the skills for, using the calculator without a UI. The interactive exploration of the implications of different ethical positions and their quantitative representation through the calculator is an important component of research that utilizes the calculator (see Holz, Kartha, and Athanasiou 2018), and this would not be possible, or only for a very limited set of users, without cerc-web.
We also limit the article to the web interface because cerc-web and the calculator "engine" are really quite distinct pieces of software, and it would be an odd piece of writing to combine them into one article, quite apart from "logistical" issues such as JOSS, to our knowledge, supporting only one repository link per article.
Lastly, we feel that the calculator "engine" is currently sufficiently well documented (and citeable) through its technical documentation (Kemp-Benedict et al. 2018) and the auto-generated Doxygen documentation on SourceForge, so we don't think it needs to be covered in a formal publication, while the same is not true for the web calculator and its distinct functionality.
OK. This submission can be handled by @kthyng or @jedbrown as editor — may I have one of you volunteer?
Sorry about the mistaken cross-references (too many tabs...). I can edit if needed.
Minor issues I noticed in the paper:
* The readme has at least one typo ("slite3" -> "sqlite3").
* JOSS publications are normally expected to have test suites, but I don't see tests or documentation for running and extending them.
I also think the paper and/or prominent documentation needs more explanation regarding research use and extensions. Per the submission requirements:
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
@jedbrown, thanks very much for your comments, please find responses below and don't worry about the mistaken cross-references.
> * "authors'" is not used as a possessive, should drop the trailing apostrophe.
> * Bib capitalization such as API and CERC.
> The readme has at least one typo ("slite3" -> "sqlite3").
Fixed. Thanks.
> * It is unusual for authors to "acknowledge" themselves.
We don't name any of the authors in the Acknowledgements section, though one of the individuals in the acknowledgements (Tyler Kemp-Benedict) has the same surname as one of the authors (Eric Kemp-Benedict), so that's perhaps causing confusion here?
> JOSS publications are normally expected to have test suites, but I don't see tests or documentation for running and extending them.
I have no idea how I would do this for this software. Can you give me a pointer? Or do you think this might not be necessary for this piece of software? In the article, we point to a live installation of the calculator (https://calculator.climateequityreference.org), so perhaps that can serve this purpose?
> I also think the paper and/or prominent documentation needs more explanation regarding research use and extensions. Per the submission requirements:
>
> - The software should be a significant contribution to the available open source software that either enables some new research challenges to be addressed or makes addressing research challenges significantly better (e.g., faster, easier, simpler)
We believe that the last two paragraphs of the article already covered this point, but we have now expanded on the specific research contributions that the software makes and enables, explicitly highlighting where functionality offered only by cerc-web has been instrumental in research output. paper.md and the PDF article proof have been updated accordingly.
> * The software should be feature complete (no half-baked solutions) and designed for maintainable extension (not one-off modifications). Minor ‘utility’ packages, including ‘thin’ API clients, are not acceptable.
We strongly believe that this is not a half-baked solution, as evidenced by the software having been in use for several years, with several research projects, both our own and others', based on results obtained with it. We are not sure how to respond further to this item; perhaps you can be a bit more specific if you have concerns in this regard?
Yup, my mix-up on author/acknowledgments.
Testing in headless mode is pretty common practice. https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Headless_mode or https://developers.google.com/web/updates/2017/06/headless-karma-mocha-chai. If someone is modifying the source code, how do they know it's still producing correct results? (Also, how does a user audit the methods and interpret the results? Are they expected to "trust" the calculator or to reproduce "interesting" findings in their own statistical environment?)
To help me understand better, of the papers you have cited that use this project, how many involve (a) modifying/extending the code in this repository, (b) running it on different data sets, and/or (c) experimenting with parameters using the calculator? How do those papers explain their methods so as to be reproducible?
> Testing in headless mode is pretty common practice. https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Headless_mode or https://developers.google.com/web/updates/2017/06/headless-karma-mocha-chai. If someone is modifying the source code, how do they know it's still producing correct results? (Also, how does a user audit the methods and interpret the results? Are they expected to "trust" the calculator or to reproduce "interesting" findings in their own statistical environment?)
Ok, thanks, I'll look into how to write such tests. I have never done that before; can you perhaps point me to a project with tests similar to the ones you'd expect in our project, so I can see how it's done?
> To help me understand better, of the papers you have cited that use this project, how many involve (a) modifying/extending the code in this repository, (b) running it on different data sets, and/or (c) experimenting with parameters using the calculator? How do those papers explain their methods so as to be reproducible?
Well, all of these papers use our own installation of the project (as we write in the article, ours is, to our knowledge, the only current installation in existence).

(a) However, as these pieces go back to 2015, some of them were done with older versions of the code, so in that sense they involve different code.

(b) Similarly, we update our own "core database" from time to time to reflect the evolving availability of relevant data (e.g., updated GDP projections or greenhouse gas emissions data), so in that sense the older pieces in the list use a different database than the newer ones. Also, as mentioned in the article, I have been using the project with a custom-created database reflecting Canada's provinces rather than the world's countries (though that work isn't cited there because I didn't end up publishing it; we could add the conference paper I gave). Further, in one not-yet-published piece, the authors created a database reflecting a hypothetical world with three ideal-typical countries to perform sensitivity experiments.

(c) Yes, several of the pieces use different parameter sets from each other, as well as several different parameter sets within the same piece. There is another grey-literature piece available that could also be added, in which we perform extensive sensitivity analyses to explore how sensitive results are to specific parameter choices.

Most of the papers cited describe the calculator methodology in general terms; some include version numbers for the calculator and the core database version used.
I'm not immediately familiar with PHP browser testing, but I'd expect it can use similar methods as Python server-side projects like Jupyter notebook.
Okay for now regarding scope; I think we can solicit reviewer feedback. We don't currently have any registered reviewers who have volunteered expertise in PHP. Can you suggest any?
@jedbrown: I started to look into writing tests for this, but I'm hoping the review can proceed before I have finished that task? Your last comment sounds like it can.
There are several people listed for PHP in the JOSS reviewers list. Specifically, with PHP as "preferred language" there are @xirdneh, @jaynanavati, @nmullins, and @johnsamuelwrites plus, with PHP as "other language" we've got @linuxscout, @gabrielspadon, @diegoenry, and @MDebasish. I can also try and find somebody else, but perhaps these folks in the JOSS reviewer pool are an appropriate first step?
@krueschan I normally look for both language and subject matter interest, but would welcome any of the PHP programmers tagged above who would volunteer to review, even if narrowly to supplement other subject matter review(s).
I'm late to the party but let me know if you want me to do anything or take over.
@jedbrown , since it seems that you have been doing editor work, I'm going to assign this to you. If you feel it should be @kthyng instead, please discuss it between you and reassign it if that's what you both decide.
@whedon assign @jedbrown as editor
OK, the editor is @jedbrown
@xirdneh @jaynanavati :wave: Would either of you be available to review this submission from the angle of web development and extensibility for new research purposes? Your review would be in addition to a domain expert who is familiar with web development, but has limited resources to evaluate those aspects of this software. I anticipate some conversation in the review thread to help focus your efforts. Thanks in advance.
@krueschan Would you be willing to add an installation option as a Docker container or similar? This would significantly streamline the review and automate what appears to be a complicated installation process for any users who wish to run their own server (even on localhost to inspect different data, etc.).
@jedbrown I have looked into Docker a bit, and it seems it could work. Alternatively, I'll put together a VirtualBox image with the software installed for the same purpose.
@krueschan Please let us know if/when a Docker image or VM is available.
@jedbrown yes I will. I have to prioritize other tasks but hopefully will get to it before the end of the month.
@jedbrown I have now created a Docker image. You can fetch it from https://hub.docker.com/r/climateequityreferenceproject/cerc-web, but I also have a completely self-contained Dockerfile in our project's GitHub repo (subdirectory `docker`) that you could use to `docker build` it yourself.
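For readers who want a sense of what such an image involves, a self-contained Dockerfile of the general kind described could look roughly like this. This is an illustrative sketch only: the base image, package names, and paths are assumptions, not the project's actual `docker/Dockerfile`.

```dockerfile
# Illustrative sketch only -- not the project's actual docker/Dockerfile.
# Base image, package names, and paths are assumptions.
FROM php:7.2-apache

# MySQL server, PHP database extensions, and dumb-init as a minimal
# init process (the image's start command uses it).
RUN apt-get update \
    && apt-get install -y default-mysql-server dumb-init \
    && docker-php-ext-install mysqli pdo_mysql

# The web application itself.
COPY . /var/www/cerc-web/public_html/

# A start script that brings up MySQL and then Apache in the foreground.
COPY docker/start.sh /start.sh

EXPOSE 80
CMD ["dumb-init", "/start.sh"]
```

Bundling the database server and web server into one image is convenient for review and local experimentation, though for production deployments one would more commonly run them as separate containers.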
In the context of creating the Docker image, I have found and fixed a few bugs, so I'd like to change the release version for the article to v3.2.1, how would I do that? Thanks.
@whedon set v3.2.1 as version
OK. v3.2.1 is the version.
Cool! I ran

```shell
$ docker pull climateequityreferenceproject/cerc-web
$ docker run -it --rm -p 8080:80 climateequityreferenceproject/cerc-web bash
root@c941332d432f:/var/www/cerc-web/public_html# dumb-init /start.sh
```

and went to `localhost:8080`. The first step (starting MySQL) fails if I let it run the default `CMD`; I don't know why. I would suggest using a non-privileged port (such as `-p 8080:80`) in the instructions because that's what a local user would need; usually one would test that way before deploying anyway.
Ok, I'll change the instructions. I can't reliably reproduce the issue you're describing (MySQL not starting), except that it happens to me too sometimes (even with well-developed Docker images such as devilbox), and restarting the Docker service fixes it.
I take it that you managed to run it using the commands you listed in your previous post?
Restarting `docker.service` doesn't help for me. I don't understand that problem, but running `bash` and then issuing the one command isn't an issue for the purposes of reviewing. Yes, it worked and appeared to be fully functional when I ran it like that.
@whedon add @mtobis as reviewer
OK, @mtobis is now a reviewer
@whedon start review
OK, I've started the review over in https://github.com/openjournals/joss-reviews/issues/1273. Feel free to close this issue now!
Submitting author: @krueschan (Christian Holz)
Repository: https://github.com/climateequityreferenceproject/cerc-web
Version: v3.2.1
Editor: @jedbrown
Reviewers: @mtobis
**Author instructions**
Thanks for submitting your paper to JOSS @krueschan. The JOSS editor (shown at the top of this issue) will work with you on this issue to find a reviewer for your submission before creating the main review issue.
@krueschan if you have any suggestions for potential reviewers then please mention them here in this thread. In addition, this list of people have already agreed to review for JOSS and may be suitable for this submission.
**Editor instructions**
The JOSS submission bot @whedon is here to help you find and assign reviewers and start the main review. To find out what @whedon can do for you, type:
@whedon commands