Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:

```
github.com/AlDanial/cloc v 1.88  T=0.41 s (935.5 files/s, 190431.7 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
SVG                              6              8              6          41535
Python                          52           1704          16088          11024
reStructuredText               316           3457           2895           1053
TeX                              1              8              0            101
Markdown                         2             42              0             81
YAML                             2              8              4             52
DOS Batch                        1              8              1             26
make                             1              4              7              9
CSS                              2              7             27              6
TOML                             1              0              0              3
-------------------------------------------------------------------------------
SUM:                           384           5246          19028          53890
-------------------------------------------------------------------------------
```
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 631
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/j.cma.2016.04.004 is OK
- 10.1016/j.cma.2017.03.037 is OK
- 10.1016/j.cma.2022.114726 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.1038/s41592-019-0686-2 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @BernardoFerreira, very interesting project and good documentation. The checklist requires a statement of need, and I think you did not add one to your docs; you have one in the paper, though. What your Python package is used for becomes clear if one has the theoretical background, but could you please add an explicit statement of need to your docs as well? If I am correct, you also do not have any automated tests in the GitHub workflow. I have only started to look at the examples (benchmarks) that you provide in your repository, so I have not yet found any comparisons with analytical results or some other sort of verification of the given results. ~Could you point them out please?~ Edit: I found the benchmark comparisons in your dissertation, pp. 174 ff., which provide a verification of the implemented methods.
Nevertheless, there seems to be no automated testing in the workflow. @Kevin-Mattheus-Moerman, how is this usually judged?
Hello @Extraweich,
Thank you for your kind words and for taking the time to review this project!
Concerning the points you raised:
Statement of Need:
Automated tests:
Thank you for updating the statement of need, @BernardoFerreira. This is fully satisfactory to me. Concerning the automated testing, I will wait for what @Kevin-Mattheus-Moerman has to say, since this is the first time I have reviewed for JOSS. I agree with your statement that analytical results are rare, and I would think that the benchmark comparisons with DNS results are sufficient.
Another point on my list concerns "State of the field: Do the authors describe how this software compares to other commonly-used packages?":
Your paper does not really compare the functionality of CRATE with other software packages out there. This may be because your SCA and ASCA approaches seem to be novel within the open-source community. A quick search on the internet brought FFTHomPy and fibergen to my attention; they at least share the FFT-based solution of the Lippmann-Schwinger equation with your package. I think it would be worth saying a few words about related packages (e.g. the two mentioned above) and highlighting what CRATE offers that these packages do not.
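(For context, the strain-based form of that equation, written with a homogeneous reference medium $\mathbb{C}^{0}$ and its associated Green operator $\boldsymbol{\Gamma}^{0}$, reads

$$\boldsymbol{\varepsilon}(\boldsymbol{x}) = \bar{\boldsymbol{\varepsilon}} - \int_{\Omega} \boldsymbol{\Gamma}^{0}(\boldsymbol{x}-\boldsymbol{y}) : \left[ \boldsymbol{\sigma}(\boldsymbol{y}) - \mathbb{C}^{0} : \boldsymbol{\varepsilon}(\boldsymbol{y}) \right] \mathrm{d}\boldsymbol{y} ,$$

and FFT-based solvers exploit the fact that $\boldsymbol{\Gamma}^{0}$ has a simple closed form in Fourier space.)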
Lastly "Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support":
In your GitHub repository you clearly state how to report issues with the software. Is it intentional that you do not invite others to contribute? If not, you could quickly add a "CONTRIBUTING.md" to your repository; you can check my repository if you need an example.
:)
@BernardoFerreira, on automated testing: it is not a strict requirement to have fully automated testing, but it is highly encouraged. What is required at a minimum is that a user can run a testing suite (e.g. manually) to compare to known results. In this case, though, I would encourage you to explore adding automated unit tests, for instance.
You mention that comparison to analytical results may be challenging. However, it should be relatively straightforward for you to add some sort of verification- and/or validation-type tests; these are usually "integration" tests because they test the overall behavior of the software. The typical sort of automated tests that are standard in Python packages (and of course other languages) are unit tests, which confirm that individual functions are working properly.
So although I agree that a full "test" of something like an entire CFD or FEA package may be challenging, it should be much easier to confirm that individual functions work properly given various known inputs.
So, given the above, I would encourage you to include some form of automated functionality testing / unit tests.
Here are some resources our board members suggested when they helped me formulate this response:
- https://github.com/jbusecke/cookiecutter-science-project
- https://realpython.com/python-testing/
- https://caam37830.github.io/book/09_computing/unittest.html
- https://coderefinery.github.io/testing/
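As a purely illustrative sketch of what I mean (the `deviatoric_part` function below is a hypothetical stand-in, not a function from CRATE's actual API), a pytest-style unit test can be as small as:

```python
import numpy as np

def deviatoric_part(tensor):
    """Hypothetical stand-in for a function under test: return the
    deviatoric part of a second-order tensor."""
    return tensor - np.trace(tensor) / 3.0 * np.eye(3)

def test_deviatoric_part_is_traceless():
    # Known property: the deviatoric part of any tensor is traceless.
    rng = np.random.default_rng(seed=0)
    tensor = rng.random((3, 3))
    assert abs(np.trace(deviatoric_part(tensor))) < 1e-12
```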
@Extraweich perhaps you can comment on the nature of the benchmarking comparisons you mentioned. Are these manual evaluations for large functional blocks (combinations) of code? Do you agree that additional and automated testing of the behaviour of functional units, as I allude to above, should still be achievable in this case?
The package offers a fast way of calculating homogenization results for inhomogeneous materials using FFT-based methods. The benchmarks compare these results with more classical approaches, such as FEM calculations (direct numerical simulations -> DNS). Personally, I think enough comparisons are given to prove sufficient agreement with the DNS results. Nevertheless, I think that @Kevin-Mattheus-Moerman has a valid point about implementing some unit tests. @BernardoFerreira, you could take a couple of your major functions and compare them against known results. I did not scan your code completely, but if you have transforming/back-transforming functions (e.g. time domain -> frequency domain -> time domain), it would be a good start to test them for consistency, e.g. $f^{-1}(f(a))\overset{!}{=}a$, as sketched below.
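A minimal sketch of such a consistency test, here using NumPy's FFT pair as a stand-in for whatever transform pair CRATE actually implements:

```python
import numpy as np

def test_transform_roundtrip():
    # Consistency check f^{-1}(f(a)) == a for a random field:
    # transforming to the frequency domain and back must recover the input.
    rng = np.random.default_rng(seed=42)
    a = rng.random((32, 32))
    a_back = np.fft.ifftn(np.fft.fftn(a)).real
    assert np.allclose(a, a_back, atol=1e-12)
```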
Hello @Extraweich,
Concerning the points you raised:
Comparison with other packages:
Contributing to the software:
Automated testing (also @Kevin-Mattheus-Moerman):
@Extraweich, once again thank you for taking your time to review this project. Please let me know your thoughts on the points above!
@BernardoFerreira:
Comparison with other packages: In my opinion, it would still be good to mention at least a few relevant packages in the paper, even if their scopes are somewhat different. I would say that the scope (homogenization in materials science/engineering) and the mathematics behind it (FFT-based solution of the Lippmann-Schwinger equation) are similar enough to warrant a comparison. When comparing these packages, it would be a good idea to highlight how CRATE is different (clustering-based model reduction) and what this functionality allows the user to do that they could not do with other packages. Would you agree with me on this point, or am I wrong in assuming that the packages are similar up to a point?
Contributing to the software: I understand that it is a bit difficult to contribute to your software, but I would still give clear instructions to interested users on how to contribute. So if you would add a CONTRIBUTING.md I would be fully satisfied. :)
Automated testing: I also see that the functionality of your code is complex, which makes unit testing a bit difficult. Since @Kevin-Mattheus-Moerman stated that automated testing is not a strict necessity, and given the benchmarks in your repository and your PhD thesis, I will check off this point on my list. I would encourage you to add some unit tests in the future, though.
Another thought came to mind that has nothing to do with the publishing process here. I personally am not very familiar with the methods used in CRATE. I have some basic knowledge of FFT-based solutions to the Lippmann-Schwinger equation, since Matti Schneider (whom you cite once or twice in your dissertation) is remotely involved in my own dissertation project. Nevertheless, I found it somewhat tedious to fully grasp the potential of CRATE; that's because it's certainly a complicated topic. You give a lot of well-chosen benchmarks, but they kind of run themselves, and the user might feel a bit overwhelmed by the abundance of results. What do you think about adding a Jupyter notebook with a very simple step-by-step example, to give the user the opportunity to more easily understand how she/he can incorporate CRATE's functionality into her/his specific research challenges?
@Extraweich :
Thank you for your comments and suggestions!
Comparison with other packages:
Contributing to the software:
Automated testing:
Regarding your suggestion:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
I have checked off all the items from the list. Thank you for your cooperation and for developing and providing CRATE to the open-source community, @BernardoFerreira.
Thank you for your review and for helping me improve this project @Extraweich!
@RahulSundar, @atzberg, @kingyin3613 can you please provide an update on review progress? Please tick any boxes you think can be ticked. If some cannot be ticked yet please give the authors some comments here, or in dedicated issues on their software repo, on what actions should be taken. Thanks!
@atzberg, are you able to get started? You can call @editorialbot generate my checklist here to get set up. Thanks.
Yes, I have started reviewing. Thanks for the info on the command for the interactive checklist.
@atzberg it says "Email replies do not support Markdown"; if you instead call @editorialbot generate my checklist here in a comment, you get a proper list.
@openjournals/dev how do you suggest we solve this "email replies" issue? Do we just repeat the call in a comment here? It seems odd that it doesn't re-interpret the Markdown; there should be a GitHub option to re-interpret it.
Yes, it looks like GitHub won't render Markdown in comments created from email replies, even if they are edited later here. I guess for now our only option is to delete that comment, and then @atzberg needs to repeat the call from GitHub instead of from an email.
Thanks for bringing that to my attention.
@RahulSundar, @atzberg, @kingyin3613 can you please provide an update on review progress? Please tick any boxes you think can be ticked. If some cannot be ticked yet please give the authors some comments here, or in dedicated issues on their software repo, on what actions should be taken. Thanks!
Yes, I have updated my responses in the checklist. I shall get back to you on the remaining items with some comments within the next week.
Thanks Rahul
Hi @BernardoFerreira, I went through the paper, code, and documentation: lovely package and well-written documentation! Everything went smoothly, except that when I was running the benchmarks there was an encoding issue; I attached details in https://github.com/bessagroup/CRATE/issues/10. I look forward to your solution!
I have now updated the checklist. Overall, it is a well-organized and well-documented software package.
[minor item] The CRATE logo image on the homepage README.md has black text that does not display on GitHub when using the dark theme. Maybe change the black text to grey, outline it in white, or use a white background so it is visible in the dark theme. Overall, a nicely organized and documented package.
> Hi @BernardoFerreira, I went through the paper, code, and documentation: lovely package and well-written documentation! Everything went smoothly, except that when I was running the benchmarks there was an encoding issue; I attached details in bessagroup/CRATE#10. I look forward to your solution!
Hello @kingyin3613!
Thank you for your kind words and for taking the time to review this project!
I've replied to your issue and attempted to fix it (displaying colors and Greek letters on the terminal). I hope that this solves the problem and that you can run the benchmarks successfully - let me know if that is not the case!
> I have now updated the checklist. Overall, it is a well-organized and well-documented software package.
Hello @atzberg,
Thank you for your appreciation and for taking the time to review CRATE!
> [minor item] The CRATE logo image on the homepage README.md has black text that does not display on GitHub when using the dark theme. Maybe change the black text to grey, outline it in white, or use a white background so it is visible in the dark theme. Overall, a nicely organized and documented package.
Thank you for pointing this out! I have fixed this problem by adding the logo with a white background (compatible with both light and dark themes).
P.S. I actually have a CRATE logo version for a black background as well, but it doesn't seem straightforward to display a different logo according to the viewer's theme on GitHub
Great, this item looks good here!
Hi @BernardoFerreira, I could successfully run all the benchmarks with the latest version of CRATE, although there was still a minor display issue on Windows (or probably only on my Windows system) that you may want to fix in the next few versions. Overall, CRATE has already met all my requirements for a mature and well-documented open-source software package, and I recommend it for publication in JOSS. Thanks for your quick responses to my issues and your contribution to the open-source software community!
Hi @BernardoFerreira, thanks for fixing the items. The package looks good to accept; I recommend it for publication in JOSS.
> P.S. I actually have a CRATE logo version for a black background as well, but it doesn't seem straightforward to display a different logo according to the viewer's theme on GitHub
Hi @BernardoFerreira, in case you have not found this functionality and are interested in using it: Markdown on GitHub can do this. Check out the raw README.md in one of my repositories (I would post the code snippet here, but then it would be rendered, which is not my intention...). You will have to be careful when uploading such a README.md to pages like pypi.org, though, because general Markdown does not support this feature and you'll see some artefacts (see this example).
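Roughly, the idea (shown here in a fenced block so that it is not rendered; the file names are illustrative):

```html
<picture>
  <!-- shown when the viewer uses GitHub's dark theme -->
  <source media="(prefers-color-scheme: dark)" srcset="docs/logo_dark.png">
  <!-- fallback for the light theme and for renderers without <picture> support -->
  <img alt="CRATE logo" src="docs/logo_light.png">
</picture>
```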
Thank you once again @kingyin3613!
Thank you for your review, @atzberg!
Hello @Extraweich, thank you for sharing this; I will take a look at your repository as an example!
I have updated all the items in the checklist. I went through the paper, code, and documentation. I didn't face any issues while running the benchmark cases, and the documentation in its current state is well done, @BernardoFerreira.
Hello @RahulSundar! Thank you for taking the time to review this project and for your appreciation. I'm glad that you could run the benchmarks without facing any issues!
@Kevin-Mattheus-Moerman, I believe that all the reviewers have now completed their review process and checked all the requirements. Please let me know what the next steps are.
@atzberg please can you tick the last (functionality) box as well, if you feel this is ready to be accepted.
@BernardoFerreira thanks for the ping. I'll check the paper/references etc. and will get started on the next steps; however, it would be great if @atzberg could finish up the above too.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Submitting author: @BernardoFerreira (Bernardo P. Ferreira)
Repository: https://github.com/bessagroup/CRATE
Branch with paper.md (empty if default branch): master
Version: v1.0.3
Editor: @Kevin-Mattheus-Moerman
Reviewers: @RahulSundar, @atzberg, @Extraweich, @kingyin3613
Archive: 10.5281/zenodo.8199879
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@RahulSundar & @atzberg & @Extraweich & @kingyin3613, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
@editorialbot generate my checklist
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @Kevin-Mattheus-Moerman know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @RahulSundar
📝 Checklist for @atzberg
📝 Checklist for @Extraweich
📝 Checklist for @kingyin3613