whedon closed this issue 3 years ago.
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @iammix, @sigvaldm it looks like you're currently assigned to review this paper :tada:.
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, due to GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿
To fix this do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@whedon generate pdf
Wordcount for paper.md is 514
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1145/1039813.1039820 is OK
- 10.7717/peerj-cs.103 is OK
- 10.11588/ans.2015.100.20553 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Software report (experimental):
github.com/AlDanial/cloc v 1.88 T=0.12 s (895.8 files/s, 81881.8 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
Python 82 1500 855 6278
Markdown 8 89 0 372
YAML 7 21 8 276
reStructuredText 4 77 73 76
TeX 1 3 0 37
make 1 4 6 9
SVG 2 0 0 4
JSON 1 0 0 1
-------------------------------------------------------------------------------
SUM: 106 1694 942 7053
-------------------------------------------------------------------------------
Statistical information for the repository '6b3010362767a27a6f4e4cf6' was
gathered on 2021/08/02.
The following historical commit information, by author, was found:
Author Commits Insertions Deletions % of changes
Matthew Scroggs 265 12520 3871 100.00
Below are the number of rows from each author that have survived and are still
intact in the current revision:
Author Rows Stability Age % in comments
Matthew Scroggs 8633 69.0 3.1 7.71
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
👋🏼 @mscroggs & @iammix & @sigvaldm this is the review thread for the paper. All of our communications will happen here from now on.
Both reviewers have checklists at the top of this thread with the JOSS requirements 🔝 As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#3556
so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for reviews to be completed within about 2-4 weeks. Please let me know if any of you require some more time. We can also use Whedon (our bot) to set automatic reminders if you know you'll be away for a known period of time.
Please feel free to ping me (@Nikoleta-v3 ) if you have any questions/concerns 👍🏻 😄
Not yet done with the review, but my first impression is good. There are clear instructions on how to 1) contribute and 2) report issues, but not 3) seek support. One could perhaps append a statement in the "Using Symfem" section that in addition to the documentation, further support may be reached through e-mail, or the issue tracker, or whatever your preference is.
Not yet done with the review, but my first impression is good. There are clear instructions on how to 1) contribute and 2) report issues, but not 3) seek support. One could perhaps append a statement in the "Using Symfem" section that in addition to the documentation, further support may be reached through e-mail, or the issue tracker, or whatever your preference is.
Good idea, I've added a link to opening a support issue in the Using Symfem section
General comment: Although there is no lack of software implementing various finite elements, Symfem implements a rather vast set of element types, and does so using symbolic representations (via SymPy, at increased computational cost). This allows further symbolic manipulation (differentiation and so on) by the user. The software is well designed, using the general Ciarlet definition (CiarletElement) of the finite element as the starting point for most (all?) element types, and allowing new types of elements to be implemented through inheritance (one file per element). This makes the software extensible and maintainable. The software is also well tested through continuous integration and parametrized tests. This reinforces maintainability and generality, and the tests do not appear to be too tightly coupled to the implementation. Some of the tests were skipped on my system. The API seems to be well complemented by docstrings, although the introductory documentation is perhaps somewhat terse. Although the software is still young and has only a sole developer (except for a bot), it is my opinion that the work represents a substantial scholarly effort, and can make it easier for other researchers to experiment with various elements during development. I will make more specific comments and requests below.
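The Ciarlet construction described above can be sketched generically. The class names below (`CiarletElementSketch`, `P1Interval`) are hypothetical illustrations of the pattern, not Symfem's actual classes:

```python
# Hypothetical sketch of the Ciarlet definition: a finite element is a triple
# (reference cell, polynomial space, DOF functionals), and new element types
# are added by subclassing a common base. Not Symfem's real code.

class CiarletElementSketch:
    """An element defined by a cell, a basis of a polynomial space,
    and a list of degree-of-freedom functionals."""

    def __init__(self, cell, poly_basis, dof_functionals):
        self.cell = cell
        self.poly_basis = poly_basis   # callables p(x)
        self.dofs = dof_functionals    # functionals l(p) -> float
        assert len(poly_basis) == len(dof_functionals)

class P1Interval(CiarletElementSketch):
    """Lagrange P1 on the unit interval: DOFs are point evaluations at 0 and 1."""

    def __init__(self):
        basis = [lambda x: 1.0, lambda x: x]          # span{1, x}
        dofs = [lambda p: p(0.0), lambda p: p(1.0)]   # evaluation functionals
        super().__init__("interval", basis, dofs)

elem = P1Interval()
# The "dual matrix" l_i(p_j); inverting it yields the nodal basis functions.
dual = [[l(p) for p in elem.poly_basis] for l in elem.dofs]
print(dual)  # [[1.0, 0.0], [1.0, 1.0]]
```

Inheritance then only has to supply the polynomial basis and the functionals, which matches the one-file-per-element structure the reviewer praises.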
While trying out the software I was a little confused by create_reference() at first, because it allowed specifying the coordinates of the vertices. What would then distinguish a reference element from any other element? The stiffness matrix example also used create_reference() to create all elements in the mesh, specifying the coordinates through the second argument. One might also expect create_reference() and create_element() to return the same kind of object, except that the former is a reference element, but in fact create_element() returns an object at a higher level of abstraction, and it seems that create_reference() creates the lower level of abstraction regardless of the vertices' positions. One way in which the object returned from create_reference() is truly a reference element, and not just any element, is that it contains forward and inverse mapping functions to any element. I think I figured out how to use it anyway, but perhaps I'm missing something. Would you mind enlightening me?
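For readers unfamiliar with the forward and inverse maps mentioned here, their role can be sketched generically. This is plain Python for illustration (`triangle_maps` is a hypothetical helper, not Symfem's API):

```python
# Generic sketch of what a reference cell provides: the affine forward map
# F(X) = v0 + J X from reference coordinates to a physical cell, and its
# inverse, so quantities on any cell can be pulled back to the reference.

def triangle_maps(v0, v1, v2):
    """Forward and inverse affine maps between the reference triangle
    with vertices (0,0), (1,0), (0,1) and the triangle (v0, v1, v2)."""
    j00, j01 = v1[0] - v0[0], v2[0] - v0[0]   # Jacobian columns
    j10, j11 = v1[1] - v0[1], v2[1] - v0[1]
    det = j00 * j11 - j01 * j10

    def forward(X):
        return (v0[0] + j00 * X[0] + j01 * X[1],
                v0[1] + j10 * X[0] + j11 * X[1])

    def inverse(x):
        dx, dy = x[0] - v0[0], x[1] - v0[1]
        return ((j11 * dx - j01 * dy) / det,
                (-j10 * dx + j00 * dy) / det)

    return forward, inverse

fwd, inv = triangle_maps((1.0, 1.0), (3.0, 1.0), (1.0, 2.0))
print(fwd((0.0, 0.0)))        # (1.0, 1.0) -- reference origin maps to v0
print(inv(fwd((0.5, 0.25))))  # (0.5, 0.25) -- round trip recovers the point
```

This is also why carrying such maps distinguishes a "reference" object from an arbitrary element, as the reviewer observes.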
There's a curious oversight in the paper: there's no reference on the finite element method at the point where the finite element method and the weak form are introduced.
@Nikoleta-v3: There are two statement-of-need items in the checklist. How is the one under "Documentation" different from the one under "Software paper"? Is it just that it should be clear from the readme/readthedocs as well what the software does?
Concerning the automated tests, on a clean conda environment the tests against Basix were skipped because Basix was not installed. This is reasonable, and I see that they pass on the continuous integration system. There were also five skipped tests in test_legendre.py (see below). This is probably fine, but I'm curious why they are skipped?
test_legendre.py::test_legendre[prism-4] PASSED [ 76%]
test_legendre.py::test_legendre[pyramid-1] SKIPPED (<Skipped instance>) [ 78%]
test_legendre.py::test_legendre[pyramid-2] SKIPPED (<Skipped instance>) [ 80%]
test_legendre.py::test_legendre[pyramid-3] SKIPPED (<Skipped instance>) [ 82%]
test_legendre.py::test_legendre[pyramid-4] SKIPPED (<Skipped instance>) [ 84%]
test_legendre.py::test_orthogonal[interval] PASSED [ 86%]
test_legendre.py::test_orthogonal[triangle] PASSED [ 89%]
test_legendre.py::test_orthogonal[tetrahedron] PASSED [ 91%]
test_legendre.py::test_orthogonal[quadrilateral] PASSED [ 93%]
test_legendre.py::test_orthogonal[hexahedron] PASSED [ 95%]
test_legendre.py::test_orthogonal[prism] PASSED [ 97%]
test_legendre.py::test_orthogonal[pyramid] SKIPPED (<Skipped instance>) [100%]
:wave: @sigvaldm
@Nikoleta-v3: There are two statement-of-need items in the checklist. How is the one under "Documentation" different from the one under "Software paper"? Is it just that it should be clear from the readme/readthedocs as well what the software does?
Yup! The statement-of-need should be clear in both the paper and in the package's documentation.
@sigvaldm Thanks for reviewing this. I'll work through your comments & questions and post replies here over the next couple of days
Sounds good, @mscroggs. All in all a very nice piece of software. Perhaps the main area of improvement in my opinion would be the documentation.
Each method and class is well documented with to-the-point docstrings, so this is good. The introductory part of the documentation is very brief, though, and doesn't even expose many of the categories of functionality present in the API. From the API I see there are vector operators, differential operators, mappings between reference elements and other elements, and whatnot. Since these aren't mentioned at all in examples (demos) or the introduction, one might in fact get the impression that the software does less than it actually does. I don't want to impose too many requirements on documentation style, since that is personal, but I would appreciate at least mentioning what exists somewhere, either in the introduction or in examples. The dot product is used in the code snippet of the matrix assembly demo, and one could for instance use the occasion to say that there's a whole range of such operations, with a reference to the API documentation. Although e.g. vdot() is used in the snippet, it's easy to miss if you're just glossing over it. On a similar account, I have also peeked at your encyclopedia over at defelement.com, and I think it's an excellent complement to Symfem. Perhaps you could add a link to that as well, in case the user needs some background information?
The closest thing to a statement-of-need I could find in the documentation is this sentence: "Symfem is a symbolic finite element definition library, that can be used to symbolically evaluate the basis functions of a finite element space.". It's short, and doesn't quite say why the software is needed. Perhaps append a few more sentences after this one, or add some sentences to the beginning of the readthedocs?
Finally, as for example of usage, I liked the stiffness matrix demo, but you don't really need symbolic evaluation to assemble a stiffness matrix. Can you think of another example to add which benefits more from the fact that expressions are symbolic? Perhaps you know of something from DefElement you can adapt?
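To illustrate the point that a stiffness matrix does not require symbolic evaluation: for P1 elements the basis gradients are constant on each triangle, so the local stiffness matrix has a closed numeric form. A minimal sketch (hypothetical helper, not the Symfem demo's code):

```python
# Purely numeric P1 local stiffness matrix on one triangle, using the
# classical formula K_ij = area * grad(phi_i) . grad(phi_j) with
# grad(phi_i) = (b_i, c_i) / (2A). No symbolic algebra needed.

def p1_local_stiffness(v0, v1, v2):
    x = [v0[0], v1[0], v2[0]]
    y = [v0[1], v1[1], v2[1]]
    two_area = (x[1] - x[0]) * (y[2] - y[0]) - (x[2] - x[0]) * (y[1] - y[0])
    b = [y[1] - y[2], y[2] - y[0], y[0] - y[1]]  # gradient numerators
    c = [x[2] - x[1], x[0] - x[2], x[1] - x[0]]
    # K_ij = (b_i b_j + c_i c_j) / (4A) = (...) / (2 * two_area)
    return [[(b[i] * b[j] + c[i] * c[j]) / (2.0 * two_area)
             for j in range(3)] for i in range(3)]

K = p1_local_stiffness((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(K)  # [[1.0, -0.5, -0.5], [-0.5, 0.5, 0.0], [-0.5, 0.0, 0.5]]
```

Symbolic evaluation earns its keep where expressions must be manipulated further (differentiated, mapped, simplified), which is why an example of that kind would showcase Symfem better.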
I think that's all my comments to begin with. Looking forward to the responses :)
While trying out the software I was a little confused by create_reference() at first, because it allowed specifying the coordinates of the vertices. What would then distinguish a reference element from any other element? The stiffness matrix example also used create_reference() to create all elements in the mesh, specifying the coordinates through the second argument. One might also expect create_reference() and create_element() to return the same kind of object, except that the former is a reference element, but in fact create_element() returns an object at a higher level of abstraction, and it seems that create_reference() creates the lower level of abstraction regardless of the vertices' positions. One way in which the object returned from create_reference() is truly a reference element, and not just any element, is that it contains forward and inverse mapping functions to any element. I think I figured out how to use it anyway, but perhaps I'm missing something. Would you mind enlightening me?
create_reference creates a reference cell, which contains functions to create the map to/from the default reference, geometry computation, etc. create_element creates a finite element on a reference cell.
Being able to define the vertices of a reference cell is a little confusing: originally I was planning to allow the user to create finite elements on a cell with any vertices, but later decided that this was probably an overcomplication. It's very useful in multiple places to be able to create these cells and use their geometry functions, though.
I'm currently updating some of the docstrings: I'll add some more detail about this.
There's a curious oversight in the paper: there's no reference on the finite element method at the point where the finite element method and the weak form are introduced.
I've added a reference in https://github.com/mscroggs/symfem/commit/1398dfde9ea156d02856682de5edf33b93274c88
Concerning the automated tests, on a clean conda environment the tests against Basix were skipped because Basix was not installed. This is reasonable, and I see that they pass on the continuous integration system. There were also five skipped tests in test_legendre.py (see below). This is probably fine, but I'm curious why they are skipped?
test_legendre.py::test_legendre[prism-4] PASSED [ 76%]
test_legendre.py::test_legendre[pyramid-1] SKIPPED (<Skipped instance>) [ 78%]
test_legendre.py::test_legendre[pyramid-2] SKIPPED (<Skipped instance>) [ 80%]
test_legendre.py::test_legendre[pyramid-3] SKIPPED (<Skipped instance>) [ 82%]
test_legendre.py::test_legendre[pyramid-4] SKIPPED (<Skipped instance>) [ 84%]
test_legendre.py::test_orthogonal[interval] PASSED [ 86%]
test_legendre.py::test_orthogonal[triangle] PASSED [ 89%]
test_legendre.py::test_orthogonal[tetrahedron] PASSED [ 91%]
test_legendre.py::test_orthogonal[quadrilateral] PASSED [ 93%]
test_legendre.py::test_orthogonal[hexahedron] PASSED [ 95%]
test_legendre.py::test_orthogonal[prism] PASSED [ 97%]
test_legendre.py::test_orthogonal[pyramid] SKIPPED (<Skipped instance>) [100%]
I've yet to implement Legendre polynomials on pyramids, so these are skipped for now (although perhaps xfail would be a better thing to do).
The software is well designed, using the general Ciarlet definition (CiarletElement) of the finite element as the starting point for most (all?) element types,
All except one (direct serendipity) are so far defined using the Ciarlet definition.
I've added an extra demo, added more information to the README, and adjusted what Read the Docs says in mscroggs/symfem#118. (although the changes won't appear on Read the Docs until I next release a version)
@whedon generate pdf
@whedon check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1145/1039813.1039820 is OK
- 10.7717/peerj-cs.103 is OK
- 10.11588/ans.2015.100.20553 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@whedon generate pdf from branch update-docs
@whedon check references from branch update-docs
Attempting PDF compilation from custom branch update-docs. Reticulating splines etc...
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@whedon check references from branch update-docs
Attempting to check references... from custom branch update-docs
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1145/1039813.1039820 is OK
- 10.7717/peerj-cs.103 is OK
- 10.11588/ans.2015.100.20553 is OK
- 10.1137/1.9780898719208 is OK
MISSING DOIs
- None
INVALID DOIs
- None
I think the new demos on the Lagrange and Nedelec elements are perfect examples of what this software can do that other software can't. I got an AssertionError when running lagrange.py, though, but I think I could understand the intention, and was able to make it pass. See if you're happy with the PR, or if not, make sure lagrange.py doesn't fail. There was also another error, but I didn't dig into that:
[sigvald@sim demo]$ pytest -v .
==================================================================================================================== test session starts =====================================================================================================================
platform linux -- Python 3.9.5, pytest-6.2.1, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python
cachedir: .pytest_cache
rootdir: /home/sigvald/Codes/symfem
plugins: cov-2.11.1
collected 5 items
test_demos.py::test_demo[lagrange.py] FAILED [ 20%]
test_demos.py::test_demo[sim.py] PASSED [ 40%]
test_demos.py::test_demo[nedelec.py] PASSED [ 60%]
test_demos.py::test_demo[stiffness_matrix.py] PASSED [ 80%]
test_demos.py::test_demo[custom_element.py] FAILED
I've added a reference in mscroggs/symfem@1398dfd
Looks good. Once it's merged into the main branch I'd be happy to tick off the "References" and "Quality of writing" checkboxes.
I've yet to implement Legendre polynomials on pyramids, so these are skipped for now (although perhaps xfail would be a better thing to do).
Now that I know the reason I think it's fine (you don't claim it to be implemented anywhere), and I don't have a strong opinion on whether to use skip or xfail. Using e.g. reason="not yet implemented" would make it obvious why it is skipped, though.
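The reason= suggestion can be sketched as follows. The test names here are hypothetical stand-ins, not Symfem's actual tests, and this assumes pytest is available:

```python
# Sketch: pytest markers carry a reason string that appears in verbose
# output, making it obvious why a case is skipped or expected to fail.
import pytest

@pytest.mark.xfail(reason="Legendre polynomials not yet implemented on pyramids")
def test_legendre_pyramid():
    raise NotImplementedError

@pytest.mark.skip(reason="not yet implemented")
def test_legendre_pyramid_alt():
    ...

# The reason is attached to the function via its pytestmark attribute:
print(test_legendre_pyramid.pytestmark[0].kwargs["reason"])
```

With xfail, pytest additionally reports XPASS if the feature starts working, which is a gentle nudge to un-mark the test.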
I've added a reference in mscroggs/symfem@1398dfd
Looks good. Once it's merged into the main branch I'd be happy to tick off the "References" and "Quality of writing" checkboxes.
It's merged. I've also bumped the version number so that the Read the Docs documentation is now up to date
The documentation has been enhanced in several places. Especially the two extra demos are enlightening, as they demonstrate the capabilities unique to this software. I recommend that this submission be accepted (@Nikoleta-v3).
Fantastic. Thank you @sigvaldm for your review 😄 👍🏻
@mscroggs @Nikoleta-v3 just to inform you, I will finish my review too by the end of this week.
Thank you for the update @iammix 😄
:wave: @iammix, please update us on how your review is going (this is an automated reminder).
and apologies for whedon 😅
@mscroggs Great work! The structure of the code is clean and clear. I like that you added the corresponding papers in the docstrings of each element implementation. Regarding the tests, it is good that you used xfail for the not-yet-implemented features; it informs the user beforehand of what to expect. I would recommend numbering the equations in the paper. After that, the paper is ready for acceptance from my side.
@Nikoleta-v3 , after the aforementioned change, the submission can be accepted.🎉
@whedon generate pdf from branch joss-update
Attempting PDF compilation from custom branch joss-update. Reticulating splines etc...
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@iammix Thanks for reviewing. I've added an equation number now
I don't know if I can throw in my 2 cents here: for a discussion of the advantages of differences in methodology, see e.g. the sagemath usage in https://arxiv.org/pdf/1806.00031.pdf. Also, the references for individual geometries are not always clear (e.g. prism Nedelec1): https://defelement.com/elements/nedelec1.html.
Submitting author: @mscroggs (Matthew Scroggs)
Repository: https://github.com/mscroggs/symfem
Version: v2021.8.2
Editor: @Nikoleta-v3
Reviewers: @iammix, @sigvaldm
Archive: 10.5281/zenodo.5242891
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@iammix & @sigvaldm, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @Nikoleta-v3 know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Review checklist for @iammix
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @sigvaldm
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper