
[REVIEW]: FEM_2D: A Rust Package for 2D Finite Element Method Computations with Extensive Support for *hp*-refinement #4172

Closed: whedon closed this issue 1 year ago

whedon commented 2 years ago

Submitting author: @jeremiah-corrado (Jeremiah Corrado)
Repository: https://github.com/jeremiah-corrado/fem_2d
Branch with paper.md (empty if default branch):
Version: 0.2.2
Editor: @jedbrown
Reviewers: @jeremylt, @YohannDudouit
Archive: 10.5281/zenodo.7849878

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/1f6d4416b383d6fe96b4008334ea1bef"><img src="https://joss.theoj.org/papers/1f6d4416b383d6fe96b4008334ea1bef/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/1f6d4416b383d6fe96b4008334ea1bef/status.svg)](https://joss.theoj.org/papers/1f6d4416b383d6fe96b4008334ea1bef)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@jeremylt & @YohannDudouit, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist, please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @jedbrown know.

✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨

whedon commented 2 years ago

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @jeremylt, @YohannDudouit, it looks like you're currently assigned to review this paper :tada:.

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this repository (https://github.com/openjournals/joss-reviews). As a reviewer, you're probably currently watching this repository, which, with GitHub's default behaviour, means you will receive notifications (emails) for all reviews 😿

To fix this, do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews
  2. You may also like to change your default settings for watching repositories in your GitHub profile: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf
whedon commented 2 years ago

Wordcount for paper.md is 1173

whedon commented 2 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.36227/techrxiv.16695163.v1 is OK
- 10.36227/techrxiv.14807895.v1 is OK
- 10.1145/1089014.1089019 is OK
- 10.1007/978-1-4612-1986-6_8 is OK
- 10.1515/jnma-2021-0081 is OK

MISSING DOIs

- None

INVALID DOIs

- None
whedon commented 2 years ago
Software report (experimental):

github.com/AlDanial/cloc v 1.88  T=0.21 s (163.7 files/s, 56075.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Rust                            23            797            731           5675
JSON                             6              3              0           4281
Markdown                         3             93              0            266
TeX                              1              9              0             89
TOML                             1              2              0             23
YAML                             1              1              4             18
-------------------------------------------------------------------------------
SUM:                            35            905            735          10352
-------------------------------------------------------------------------------

Statistical information for the repository '4e5b9574b6f1c023162ffc94' was
gathered on 2022/02/16.
The following historical commit information, by author, was found:

Author                     Commits    Insertions      Deletions    % of changes
Jeremiah                         2          1664              0           24.67
jeremiah-corrado                10          1708           3372           75.33

Below are the number of rows from each author that have survived and are still
intact in the current revision:

Author                     Rows      Stability          Age       % in comments
whedon commented 2 years ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

jedbrown commented 2 years ago

Hi @jeremylt @YohannDudouit :wave: Welcome to JOSS and thanks for agreeing to review! The comments from @whedon above outline the review process, which takes place in this thread (possibly with issues filed in the FEM_2D repository). I'll be watching this thread if you have any questions.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention this issue so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for reviews to be completed within a month or so. Please let me know if you require some more time. We can also use Whedon (our bot) to set automatic reminders if you know you'll be away for a known period of time.

Please feel free to ping me (@jedbrown) if you have any questions/concerns.

jeremylt commented 2 years ago

@jedbrown, could you resend the invite? I waited too long before clicking the link.

jedbrown commented 2 years ago

Whedon retired young, so we now ask `@editorialbot generate my checklist`. You don't need an invite.

jeremylt commented 2 years ago

Review checklist for @jeremylt

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

YohannDudouit commented 2 years ago

Review checklist for @YohannDudouit

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

YohannDudouit commented 2 years ago

I opened the issue https://github.com/jeremiah-corrado/fem_2d/issues/2 containing a number of observations and suggestions.

jeremylt commented 2 years ago

Since @YohannDudouit made a pretty comprehensive review on the overall structure and I agree with many of his points, I focused on creating a few issues highlighting some more Rust specific technical aspects:

jeremiah-corrado commented 2 years ago

Thank you both for the detailed feedback! It's really nice to get an outside perspective on the design and documentation.

I am currently in prep-mode for my thesis defense, but I should be able to start addressing these issues within a couple of weeks.

In the meantime, I have a few questions and notes about the suggestions you made:

@YohannDudouit:

@jeremylt:

Thanks again for all the excellent feedback!

jedbrown commented 2 years ago

I'll just respond to the question about scope. There is a limitless number of extensions and improvements for any nontrivial software, so you have to draw the line somewhere. I think you want to try hard to make the architecture ready to receive contributions. If the new thing looks like implementing a trait or adding an enum variant, it's tractable to many more people than if it requires refactoring deeper assumptions. Such modularity can easily go overboard as well, but if you think about desired extensions and put yourself in the shoes of someone who might be interested in implementing them, you should be able to identify what is "ready" and what is "theoretically, one could refactor the code to add".
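For concreteness, here is a generic Rust sketch (an editorial illustration under assumed names, not fem_2d's actual API) of the kind of extension point described above: a new element family arrives as a trait implementation, and a new refinement kind as an enum variant, rather than as a refactor of deeper assumptions.

```rust
/// Hypothetical extension point: a contributor adds a new element family
/// by implementing this trait, without touching the core crate.
pub trait ShapeFunction {
    /// Evaluate the shape function at a point in the reference element.
    fn eval(&self, xi: f64, eta: f64) -> f64;
}

/// Likewise, a closed set of refinement kinds can grow by one variant.
pub enum Refinement {
    Isotropic,
    AnisotropicU,
    AnisotropicV,
    // Adding a variant here is tractable for a contributor; refactoring
    // deeper assumptions baked into the mesh would not be.
}

/// Example contribution: a bilinear shape function on [-1, 1]^2.
pub struct Bilinear;

impl ShapeFunction for Bilinear {
    fn eval(&self, xi: f64, eta: f64) -> f64 {
        0.25 * (1.0 - xi) * (1.0 - eta)
    }
}
```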

jeremylt commented 2 years ago
  • All of the internal calls to `.unwrap()` are made when I know that a function call will not return an `Err`. For example, in the `global_h_refinement()` method on `Mesh`, I call `unwrap()` on `elem_is_h_refineable()` because it only returns an `Err` when the provided `elem_id` does not exist. In this case I know that it does exist, because I am iterating over the list of elems. Therefore, I can call `unwrap` rather than propagating the error using the `?` operator. Would it be helpful if I made a case for the validity of calling `unwrap` in the documentation for these methods?

That all makes sense for your codebase at this moment in time - at the very least I'd recommend you document for your future self why those unwraps are valid. I still wouldn't do this though, personally. You are tightly coupling these pieces of code with implicit assumptions - specifically, you are assuming that there is only one way for this function call to fail and that the constructor and/or specific usage for this object prevents this one failure mode. You are now responsible for maintaining these (undocumented!) assumptions through any changes to the function or the object. Why not just let the error handler keep track of this for you and let it give you a nice error message if future design decisions push you into an error case you hadn't originally anticipated?

Unless you identify specific performance bottlenecks, I personally find it easier for future development to make use of the ergonomic language features and let the compiler optimize out the parts you don't need. I tend to assume that the error handling is free/nearly free until I profile that it is a bottleneck. Maybe I'm just a bit paranoid/defensive in my coding, but I tend to make everything but the simplest getters or helper functions return a `Result`.

Ultimately it's your call what conventions you adopt here. I think it saves a ton of future headache to just use `Result`s and `?` everywhere, but you should at least note why these unwraps shouldn't ever fail, in case future development changes those assumptions.
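A minimal sketch of the two conventions under discussion, using the method names mentioned above but otherwise hypothetical types and signatures (not fem_2d's actual code):

```rust
#[derive(Debug)]
struct MeshError(String);

struct Mesh {
    elem_ids: Vec<usize>,
}

impl Mesh {
    fn elem_is_h_refineable(&self, elem_id: usize) -> Result<bool, MeshError> {
        // Hypothetically, the only failure mode is a nonexistent elem_id.
        self.elem_ids
            .get(elem_id)
            .map(|_| true)
            .ok_or_else(|| MeshError(format!("no elem with id {elem_id}")))
    }

    /// Convention A: unwrap, justified by an invariant that must be
    /// documented and maintained by hand.
    fn global_h_refinement_unwrap(&mut self) {
        for id in 0..self.elem_ids.len() {
            // INVARIANT: `id` comes from iterating the elem list, so the
            // only failure mode of `elem_is_h_refineable` cannot occur here.
            let _refineable = self.elem_is_h_refineable(id).unwrap();
        }
    }

    /// Convention B: propagate with `?`, so the caller sees a real error
    /// if a future change silently breaks that assumption.
    fn global_h_refinement(&mut self) -> Result<(), MeshError> {
        for id in 0..self.elem_ids.len() {
            let _refineable = self.elem_is_h_refineable(id)?;
        }
        Ok(())
    }
}
```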

jeremiah-corrado commented 2 years ago

Hello all,

Just a quick update to summarize some recent progress:

Thanks for all the feedback and discussion so far!

jedbrown commented 2 years ago

That's great to hear. Thanks for the status update.

jedbrown commented 2 years ago

@editorialbot generate pdf

editorialbot commented 2 years ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

jeremylt commented 2 years ago

My issues have been addressed.

One minor point - I suspect 'galerking sampling' on page 6, footnote 1 should be 'Galerkin Sampling'

jedbrown commented 2 years ago

@YohannDudouit Could you give us an update as to whether your questions have been sufficiently addressed and the remaining items on your checklist?

YohannDudouit commented 2 years ago

@jedbrown We may want to discuss specific points to see if they meet the JOSS criteria, in particular "Substantial scholarly effort", "Functionality", "Performance", "Functionality documentation", "A statement of need", and "State of the field".

While I find this work overall interesting, I have some concerns. The purpose of this library and the targeted audience are not very clear to me. The paper introduction claims "benefits" of the RBS approach without clearly stating what these are. The "Statement of need" opens with "Efficiently computing FEM solutions" and mentions the library Deal.II as a point of comparison; however, it would be good to clarify what "efficient" means in this context and on which aspects Deal.II should be compared (performance?).

The main advantage over Deal.II seems to be: "The succinctness of the continuity enforcement algorithm removes much of the difficulty of implementing new features. This is a major barrier to entry for contributing to larger and more complex packages." I don't know Deal.II very well, but it seems suspicious to me that implementing new features would usually require a good understanding of continuity enforcement algorithms; for instance, in the MFEM library, users very rarely have to be aware of these algorithms to implement new features.

After investigating the implementation, I also have strong doubts about the performance, and therefore about the capacity to "Efficiently comput[e] FEM solutions". I also find the documentation minimalist: the documentation of traits, structs, and functions rarely exceeds a few lines. The main page is also outdated (the ShapeFn and Integral traits do not exist anymore) and gives an erroneous impression of the library's generality. I also find multiple names particularly confusing for users:

This brings me to some of the issues I have been trying to bring attention to here. One of the main advantages of the Rust language is that it allows zero-cost high-level abstractions, i.e. interfaces close to the mathematical concepts that hide implementation details. I think the fem_2d library does not do a good job of separating abstraction from implementation; specifically, implementation details often pollute the abstraction, resulting in interfaces that mix user-facing concerns with implementation details. I found it difficult, while exploring the library, to identify what was meant to be user interface and what was more internal. In particular, it seems to me that the claim that the library can easily be extended to support all sorts of finite elements is exaggerated in its current state. Currently, only H(curl) is supported; specificities of such finite elements pollute the generality of the finite element abstractions and mostly discourage external contributions, due to the large refactors that extending the library would require (even for simpler finite elements such as H1), see this example. Finally, I find it hard to answer the question: "Why would I choose to contribute to fem_2d rather than another open-source finite element library?"

jedbrown commented 2 years ago

Thanks @YohannDudouit for your comprehensive review.

@jeremiah-corrado how are you doing? From the perspective of JOSS, we are looking for accuracy. So if you don't want to make major changes to the library, you can adjust the abstract to make clear that it's a library for linear $H(\text{curl})$ problems or whatever is an accurate statement of what is currently supported. Of course software can hypothetically be extended to do lots of things, but if there isn't an interface with more than one implementation showing how it works, it's unlikely to attract developers to actually do that work (and exaggerated claims are likely to turn off people who might otherwise be capable of doing such work). So please just let us know what you plan to do and don't hesitate to ask questions.

As an aside, I notice that the quadrature interface handles only weights, leaving the points to the user to index manually. This `Fn(usize, usize) -> f64` is an unusual interface in mathematical software; usually the quadrature routine would be aware of the points and thus the integrand would be `Fn(f64, f64) -> f64`, which is easier to use and performs just as well.
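To make the contrast concrete, here is a sketch of the two signatures (hypothetical routines assuming a tensor-product rule with the same 1D points and weights in each direction; not fem_2d's actual code):

```rust
/// Index-based integrand: the caller must map (i, j) to values itself.
fn integrate_indexed<F: Fn(usize, usize) -> f64>(weights: &[f64], integrand: F) -> f64 {
    let mut sum = 0.0;
    for (i, wi) in weights.iter().enumerate() {
        for (j, wj) in weights.iter().enumerate() {
            sum += wi * wj * integrand(i, j);
        }
    }
    sum
}

/// Point-based integrand: the quadrature routine owns the points, so the
/// caller writes the integrand directly in terms of coordinates.
/// Assumes points.len() == weights.len().
fn integrate_points<F: Fn(f64, f64) -> f64>(
    points: &[f64],
    weights: &[f64],
    integrand: F,
) -> f64 {
    let mut sum = 0.0;
    for (i, &x) in points.iter().enumerate() {
        for (j, &y) in points.iter().enumerate() {
            sum += weights[i] * weights[j] * integrand(x, y);
        }
    }
    sum
}
```

With the second form, a caller writes, e.g., `integrate_points(&pts, &wts, |x, y| x * x + y * y)` and never touches indices.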

jedbrown commented 2 years ago

Hi @jeremiah-corrado :wave:. I just wanted to check in about your plans. To be clear, you don't need to make major changes to the code in order to publish in JOSS, but the paper and docs need to accurately state what the software is now, not what you imagine it can become. Of course you're welcome to act on the review in the form of major code changes.

jeremiah-corrado commented 2 years ago

Hi @jedbrown, sorry for the delay in responding to your earlier message!

I plan to rewrite parts of the paper and introductory documentation to reflect that the library specifically targets H(curl) problems. That should happen this coming weekend.

As an aside, I notice that the quadrature interface handles only weights, leaving the points to the user to index manually. This `Fn(usize, usize) -> f64` is an unusual interface in mathematical software; usually the quadrature routine would be aware of the points and thus the integrand would be `Fn(f64, f64) -> f64`, which is easier to use and performs just as well.

This was done for performance reasons, as it allows the precomputed basis-function values to be repeatedly called from memory rather than being recomputed. Yohann had a similar concern, which I responded to in more detail here: https://github.com/jeremiah-corrado/fem_2d/issues/2#issuecomment-1126263955
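The stated rationale, sketched with hypothetical names (not fem_2d's actual code): integer indices let the integrand read precomputed basis-function values straight out of a table, whereas a coordinate-based integrand would invite re-evaluating shape functions at every quadrature point.

```rust
/// Hypothetical table of basis-function values, filled once per
/// combination of expansion order and quadrature rule.
struct BasisTable {
    /// values[basis_fn][point_i][point_j]
    values: Vec<Vec<Vec<f64>>>,
}

impl BasisTable {
    /// With an index-based integrand, each lookup is a cheap memory read;
    /// no shape-function evaluation happens inside the quadrature loop.
    fn value(&self, basis_fn: usize, i: usize, j: usize) -> f64 {
        self.values[basis_fn][i][j]
    }
}
```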

jeremiah-corrado commented 1 year ago

Hi @jedbrown, I made some updates to the paper a few weeks ago that reduced the claimed scope of the library to the H(curl) functionality that is currently supported. I also made it clear that extending the library into other domains would be more involved than simply creating custom implementations of the basis-function and integral traits. I believe these changes should address the remaining issues in the review.

Please let me know what the next steps are. Thank you!

jedbrown commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

jedbrown commented 1 year ago

@YohannDudouit How do you feel about these revisions? Would this be sufficient to complete your review?

danielskatz commented 1 year ago

👋 @YohannDudouit - can we get your response to @jedbrown's question?

YohannDudouit commented 1 year ago

Hi @jedbrown, @danielskatz, and @jeremiah-corrado, I apologize for the slow answer. My main remaining issue is with the "Statement of need": I think the use of "efficient"/"efficiently" is confusing and misleading, and I am also not comfortable with the claim of a "distinct advantage over Deal.II". If I understand the statement of need correctly, "efficient" is used to mean "ease of implementation" and also relates to the ability to perform hp-adaptivity. The ability to perform hp-adaptivity is not exclusive to FEM_2D, and in that regard the RBS approach uses more resources (in particular memory) than a "constrained-node refinement" approach (used by Deal.II, for instance). Unless some sort of performance comparison between FEM_2D and Deal.II is provided, I would suggest avoiding the claims over Deal.II and just mentioning Deal.II as one of the FEM libraries using a "constrained-node refinement" approach in the introduction (maybe line 33?).

Regarding the "ease of implementation", I have mixed feelings about these claims: while I understand how this can be an important point for the "efficiency" of developers, I think it is misleading to categorize this under "Efficiently computing FEM solutions" (line 48). Moreover, even if the RBS approach allows easier implementation of continuity conditions than a "constrained-node refinement" approach, as mentioned in previous conversations, the public API is relatively polluted by implementation details; for instance, the quadrature interface exposes a public API close to the implementation for "performance reasons", when these implementation details could be hidden from the user by providing a public API closer to the mathematics and keeping the optimization details private.

Therefore, I would suggest clarifying the pros and cons of the RBS approach relative to a "constrained-node refinement" approach, and explaining how this can benefit FEM solutions that require hp-adaptivity. In particular, the end of the introduction (lines 33-41) explains "what" is different in RBS compared to a "constrained-node refinement" approach, but leaves the reader wondering "so what". I would suggest clarifying why one should care about the RBS features promoted in the introduction, i.e. (1) why should one care to have a "forest" of elements instead of a "flat" collection of elements, and (2) what is the use of integrating functions defined on different layers of element trees (mixed mesh resolution FEM?). @jeremiah-corrado Please feel free to discuss these points here before modifying the paper.

jeremiah-corrado commented 1 year ago

Hi @YohannDudouit, thanks for the additional feedback. It sounds like we are mostly on the same page w.r.t. what kinds of claims the paper should be making; however, some of my wording and explanations probably need to be improved.

TL;DR version of the following: I think the way you were interpreting the paper conflated some background info about per-dof efficiency with the claims I was making about the benefits of the RBS method (over other efficient methods). There are several places — particularly in the statement of need — where the wording can be improved to alleviate this confusion. Let me know what you think about the following clarifications:


I think the use of "efficient"/"efficiently" is confusing and misleading

In the "Statement of Need" section, I am using the word "efficiently" to mean something very specific: iteratively converging on a solution at an exponential rate w.r.t. the number of degrees of freedom.

The discussion of efficiency is strictly there to motivate the use of hp-refinement in general (as opposed to using p-refinement alone). I am not intending to make any claims that the library is somehow more efficient than other methodologies or libraries. I am saying that it provides convergence rates that are in the same class as other FEM libraries like Deal.II. Namely, exponential convergence as opposed to algebraic.
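For reference, this is the standard distinction from the hp-FEM literature (stated informally; constants and exponents depend on the problem): fixed-order h-refinement of a solution with limited regularity converges algebraically in the number of degrees of freedom N, while suitably graded hp-refinement converges exponentially. In 2D, the classical estimates take the form

```latex
\| u - u_h \|    \leq C \, N^{-\alpha}     \quad \text{(algebraic)}
\| u - u_{hp} \| \leq C \, e^{-b N^{1/3}}  \quad \text{(exponential)}
```

with problem-dependent constants $C, \alpha, b > 0$.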

The goal was to convey this idea without digging into the math of theoretical convergence rates, but perhaps I should include a more rigorous definition of "efficient/efficiently" in this section? Or use a different word altogether?

I am also not comfortable with the claim of "distinct advantage over Deal.II".

The distinct advantage over Deal.II is that the Mesh data structure in the library imposes fewer constraints on h-refinement. Namely, anisotropic h-refinements are allowed for Continuous Galerkin problems, and there is no limit on the number of hanging nodes along each edge.

Deal.II, by contrast, only allows anisotropic refinement of hexahedral elements for Discontinuous-Galerkin problems. And even then, it limits refinements to one hanging node per edge. See this example from their documentation: https://www.dealii.org/developer/doxygen/deal.II/step_30.html.

I can change the phrasing from "distinct advantage" to "an advantage"? Or perhaps state that this is an advantage over the constrained-nodes approach in general instead of discussing Deal.II?

The ability to perform hp-adaptivity is not exclusive to FEM_2D, and in that regard the RBS approach uses more resources (in particular memory) than a "constrained-node refinement" approach (used by Deal.II, for instance). Unless some sort of performance comparison between FEM_2D and Deal.II is provided, I would suggest avoiding the claims over Deal.II and just mentioning Deal.II as one of the FEM libraries using a "constrained-node refinement" approach in the introduction (maybe line 33?)

The paper doesn't make any performance claims over Deal.II. It certainly does not claim that fem_2d is the only library that implements hp-adaptivity. I'm not sure where you are seeing that?

I'd be happy to include a few sentences about the tradeoffs between constrained-nodes and RBS as it pertains to: memory consumption, implementation complexity (lines of code, not big-O), etc.

Regarding the "ease of implementation", I have a mixed feeling about these claims, if I understand how this can be an important point for the "efficiency" of developers, I think this is misleading to categorize this under "Efficiently computing FEM solutions" (line 48)

Again, line 48 is only justifying the usefulness of hp-refinement as such. This is intended to establish background information, particularly for the reader who is wondering why someone might be concerned with hp-refinement at all.

In other words, the logical progression of this section was intended to be something like:

  • Efficiently solving FEM problems is important
  • For hard problems, this requires h and p refinement
  • fem_2d is a library that implements hp-refinement
    • In fact, its primary intellectual contribution is its hp-refinement API
  • fem_2d might be of particular interest (over some other libraries that have hp-adaptivity) because RBS also gives it other interesting features like:
    • anisotropic h-refinements
    • n-irregularity
    • a succinct and easy-to-follow implementation of the Mesh data structure and associated hp-refinement code

mentioned in previous conversations, the public API is relatively polluted by implementation details; for instance, the quadrature interface exposes a public API close to the implementation for "performance reasons", when these implementation details could be hidden from the user by providing a public API closer to the mathematics and keeping the optimization details private.

I agree that the portion of the API that pertains to integration has a lot of implementation details. However, in a response to one of your previous messages (the first few paragraphs of: https://github.com/jeremiah-corrado/fem_2d/issues/2#issuecomment-1126263955), I was really trying to convey the idea that fem_2d's primary contribution is its Mesh data structure and hp-refinement API. The portions of the code related to integration and Maxwell's equations are really there to prove the validity of the method. The structure of the associated research is very similar, in the sense that RBS as a methodology is the primary focus and Maxwell's eigenvalue problem is used as an avenue to experiment with it.

I also think the structure of this paper conveys the same idea. The "hp-Refinement API" section is the longest, and is the focus of the introduction, conclusion and statement of need. The "Problem Formulation and Solution" section, by contrast, is much shorter and states explicitly that it is meant as an example of how the hp-refinement API might be employed. See line 124.

Please let me know if you disagree that this is the central focus of the paper/documentation, and I can try to make it more clear.

In particular, the end of the introduction (lines 33-41) explains "what" is different in RBS compared to a "constrained-node refinement" approach, but leaves the reader wondering "so what". I would suggest clarifying why one should care about the RBS features promoted in the introduction, i.e. (1) why should one care to have a "forest" of elements instead of a "flat" collection of elements, and (2) what is the use of integrating functions defined on different layers of element trees (mixed mesh resolution FEM?).

I agree, this leaves the reader hanging a bit. I can add some more detail/justification to that paragraph.

YohannDudouit commented 1 year ago

Hi @jeremiah-corrado ,

Hi @YohannDudouit, thanks for the additional feedback. It sounds like we are mostly on the same page w.r.t. what kinds of claims the paper should be making; however, some of my wording and explanations probably need to be improved.

TL;DR version of the following: I think the way you were interpreting the paper conflated some background info about per-dof efficiency with the claims I was making about the benefits of the RBS method (over other efficient methods). There are several places — particularly in the statement of need — where the wording can be improved to alleviate this confusion. Let me know what you think about the following clarifications:

I think the use of "efficient"/"efficiently" is confusing and misleading

In the "Statement of Need" section, I am using the word "efficiently" to mean something very specific: iteratively converging on a solution at an exponential rate w.r.t. the number of degrees of freedom.

If this is what you mean, then this is what you should write. I would also suggest briefly describing what a "hard" problem is, and introducing hp-adaptivity as a solution to that problem. However, correct me if I'm wrong, but how to apply hp-adaptivity to recover an exponential rate of convergence while minimizing dofs is an open problem, and we only know how to do it for specific cases; i.e., we do not know how to apply hp-adaptivity a priori to any problem that produces a singular solution.

The discussion of efficiency is strictly there to motivate the use of hp-refinement in general (as opposed to using p-refinement alone). I am not intending to make any claims that the library is somehow more efficient than other methodologies or libraries. I am saying that it provides convergence rates that are in the same class as other FEM libraries like Deal.II. Namely, exponential convergence as opposed to algebraic.

The goal was to convey this idea without digging into the math of theoretical convergence rates, but perhaps I should include a more rigorous definition of "efficient/efficiently" in this section? Or use a different word altogether?

I am also not comfortable with the claim of "distinct advantage over Deal.II".

The distinct advantage over Deal.II is that the Mesh data structure in the library imposes fewer constraints on h-refinement. Namely, anisotropic h-refinements are allowed for Continuous Galerkin problems, and there is no limit on the number of hanging nodes along each edge.

Deal.II, by contrast, only allows anisotropic refinement of hexahedral elements for Discontinuous-Galerkin problems. And even then, it limits refinements to one hanging node per edge. See this example from their documentation: https://www.dealii.org/developer/doxygen/deal.II/step_30.html.

I can change the phrasing from "distinct advantage" to "an advantage"? Or perhaps state that this is an advantage over the constrained-nodes approach in general instead of discussing Deal.II?

Now that the comparison is clear, I think it is inappropriate. The introduction gives the impression that RBS is being compared with the "constrained node" approach in general; however, this limitation is specific to Deal.II, not to the "constrained node refinement" approach. For instance, MFEM does not have this limitation despite using a "constrained node refinement" approach, making the comparison with Deal.II largely irrelevant.

The ability to perform hp-adaptivity is not exclusive to FEM_2D, and in that regard the RBS approach uses more resources (in particular memory) than a "constrained-node refinement" approach (used by Deal.II, for instance). Unless some sort of performance comparison between FEM_2D and Deal.II is provided, I would suggest avoiding the claims over Deal.II and just mentioning Deal.II as one of the FEM libraries using a "constrained-node refinement" approach in the introduction (maybe line 33?)

The paper doesn't make any performance claims over Deal.II. It certainly does not claim that fem_2d is the only library that implements hp-adaptivity. I'm not sure where you are seeing that?

I'd be happy to include a few sentences about the tradeoffs between constrained-nodes and RBS as it pertains to: memory consumption, implementation complexity (lines of code, not big-O), etc.

This may be helpful; however, I don't think lines of code are relevant. Brief abstract algorithms could be interesting.

Regarding the "ease of implementation", I have a mixed feeling about these claims, if I understand how this can be an important point for the "efficiency" of developers, I think this is misleading to categorize this under "Efficiently computing FEM solutions" (line 48)

Again, line 48 is only justifying the usefulness of hp-refinement as such. This is intended to establish background information, particularly for the reader who is wondering why someone might be concerned with hp-refinement at all.

In other words, the logical progression of this section was intended to be something like:

  • Efficiently solving FEM problems is important
  • For hard problems, this requires h and p refinement
  • fem_2d is a library that implements hp-refinement

    • In fact, its primary intellectual contribution is its hp-refinement API
  • fem_2d might be of particular interest (over some other libraries that have hp-adaptivity) because RBS also gives it other interesting features like:

    • anisotropic h-refinements
    • n-irregularity
    • a succinct and easy-to-follow implementation of the Mesh data structure and associated hp-refinement code

mentioned in previous conversations, the public API is relatively polluted by implementation details; for instance, the quadrature interface exposes a public API close to the implementation for "performance reasons", when these implementation details could be hidden from the user by providing a public API closer to the mathematics and keeping the optimization details private.

I agree that the portion of the API that pertains to integration has a lot of implementation details. However, in a response to one of your previous messages (the first few paragraphs of: jeremiah-corrado/fem_2d#2 (comment)), I was really trying to convey the idea that fem_2d's primary contribution is its Mesh data structure and hp-refinement API. The portions of the code related to integration and Maxwell's equations are really there to prove the validity of the method. The structure of the associated research is very similar, in the sense that RBS as a methodology is the primary focus and Maxwell's eigenvalue problem is used as an avenue to experiment with it.

I also think the structure of this paper conveys the same idea. The "hp-Refinement API" section is the longest, and is the focus of the introduction, conclusion and statement of need. The "Problem Formulation and Solution" section, by contrast, is much shorter and states explicitly that it is meant as an example of how the hp-refinement API might be employed. See line 124.

I don't disagree, but I don't see what is special about this API; it's mostly the same API as any other FEM library supporting hp-adaptivity.

Please let me know if you disagree that this is the central focus of the paper/documentation, and I can try to make it more clear.

In particular, the end of the introduction (lines 33-41) explains "what" is different in RBS compared to a "constrained-node refinement" approach, but leaves the reader wondering "so what". I would suggest clarifying why one should care about the RBS features promoted in the introduction, i.e. (1) why should one care to have a "forest" of elements instead of a "flat" collection of elements, and (2) what is the use of integrating functions defined on different layers of element trees (mixed mesh resolution FEM?).

I agree, this leaves the reader hanging a bit. I can add some more detail/justification to that paragraph.

Reading your answer, this seems to be the core contribution in comparison to a "constrained dof" approach.

jedbrown commented 1 year ago

Thanks for the discussion here. I just want to note that "novelty" per se is not a requirement for JOSS. You have a package written in Rust that implements an hp-adaptive FE method. It doesn't need to have unique algorithmic capability. It does need to compare to related software, and that comparison needs to accurately depict those other packages. If there are kinds of tasks that are simpler or safer with your package, you might mention those. For comparison to MFEM, @YohannDudouit could perhaps mention some common types of user errors and you could assess whether those are possible in FEM_2D. But the bottom line is that JOSS recognizes value in diversity so don't try to force a "better than" narrative.

BTW, you may want to mention Fenris, which is a finite element (not hp-adaptive) package written in Rust.

danielskatz commented 1 year ago

👋 @jedbrown - It looks like it's been about a month here with no activity that I can see. What are the next steps to continue the processing of this submission?

jedbrown commented 1 year ago

@jeremiah-corrado Will you be able to make revisions based on the comments above, which I think are mostly about the comparison with related work. (Of course you can choose to make code changes too, but that shouldn't be necessary for acceptance.)

@YohannDudouit Could you please revisit your checklist and consolidate what you think remains as necessary changes for acceptance, acknowledging that JOSS's criteria do not insist that the capability be "better" than that of existing hp-adaptive packages like Deal.II and MFEM.

YohannDudouit commented 1 year ago

@jedbrown I am mostly okay with the current state. I do not agree that algorithms are the main reason for code complexity; I think the reality is far more complex. My experience is that the number of lines of code depends much more on design choices, the programming experience of the developers, and code architecture than on the actual algorithms being implemented. For instance, the hp-capability in MFEM is currently limited by the serial/parallel design and has little to do with the hp-algorithms as far as I know. Anyway, I am not requesting proof of the superiority of FEM_2D over Deal.II and MFEM, but I find the claim over Deal.II somewhat inappropriate, difficult to grasp for the reader, and not very relevant; if you think this is fine, then we can move on.

jeremiah-corrado commented 1 year ago

@jedbrown Yeah, sorry for the delay, I'll be making changes this weekend. I think I'll remove any mention of other FEM libraries and simply explain what fem_2d has to contribute.

jedbrown commented 1 year ago

Okay, thanks. Note that you still have to discuss related work (so please keep reference to hp-adaptive FE libraries), you just don't have to claim greater capability. Just try to give the reader an accurate picture.

jeremiah-corrado commented 1 year ago

Hello @YohannDudouit, I made some updates to the paper that I believe will address your concerns: https://github.com/jeremiah-corrado/fem_2d/commit/617808b932eceb7b382dffb157a7b63b4834f1cb. Please let me know what you think. Thanks!

YohannDudouit commented 1 year ago

@jeremiah-corrado I think the manuscript is very close (besides the remark I made on your commit). However, I don't think you discuss related work yet, as requested by @jedbrown.

jedbrown commented 1 year ago

@jeremiah-corrado :wave: Just checking in on this. I also think it's close. Please let us know when you think it's ready for a (hopefully last) look.

jeremiah-corrado commented 1 year ago

Hi @jedbrown and @YohannDudouit, I've made some updates with a brief related-work paragraph in the Statement of Need section: https://github.com/jeremiah-corrado/fem_2d/commit/dcc6c2116a07b0b16848d9c7fc2bf4c895d12c19. Please let me know if this addresses your concerns. Thanks!

jedbrown commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

jeremiah-corrado commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

YohannDudouit commented 1 year ago

@jeremiah-corrado Thank you for your patience and dedication to improving your code and publication. I am very satisfied with the current state of your publication; I think it clearly highlights the strengths of your RBS approach. I hope to see your library grow and extend its functionality to 3D and more PDE problems.

@jedbrown As stated above, I am fully satisfied with the article, and all my concerns have been addressed.

jedbrown commented 1 year ago

Thanks, @YohannDudouit. Could you tick the remaining boxes in your checklist?

@jeremiah-corrado Thanks for your work. Just some copy edits and then I'll ask you to archive.

Paper

Code

jeremiah-corrado commented 1 year ago

Hi @jedbrown, thanks for the feedback and copy edits! Your PR was merged, and I believe I've resolved most of your concerns from above in: https://github.com/jeremiah-corrado/fem_2d/commit/ee15513aae873c566e2686927030960f2538fbea

I see that running the tests modifies files that are committed to the repository. Visualizing the two files, the domains and solutions are very different.

Those must have been from an old test run or something. I decided to gitignore the contents of that directory.

Perhaps you'd like to newtype the expansion order so that ranges are validated at that time, rather than passing `u8`/`i8` and needing to validate within a mutation interface?

I'm not sure what you mean by this.
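For reference, the suggestion is most likely the Rust "newtype" pattern: wrap the raw integer in a dedicated type that validates its range once, at construction, so that every interface accepting the wrapper can trust it without re-validating. A minimal sketch with hypothetical names and bounds (not a prescription for fem_2d's API):

```rust
/// A hypothetical newtype wrapping the expansion order. The valid range is
/// checked once, at construction, so mutation interfaces can accept an
/// `ExpansionOrder` without re-validating it.
#[derive(Clone, Copy, Debug)]
pub struct ExpansionOrder(u8);

impl ExpansionOrder {
    /// Hypothetical upper bound on the supported order.
    pub const MAX: u8 = 20;

    pub fn new(order: u8) -> Result<Self, String> {
        if (1..=Self::MAX).contains(&order) {
            Ok(Self(order))
        } else {
            Err(format!("expansion order must be in 1..={}, got {order}", Self::MAX))
        }
    }

    pub fn get(self) -> u8 {
        self.0
    }
}

// A mutation interface can then take the validated type directly, e.g.:
// fn set_expansion_order(&mut self, order: ExpansionOrder) { ... }
```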