barbagroup / bempp_exafmm_paper

Manuscript repository for our research paper, including reproducibility packages for all results, and latex source files.

Round 4, Reviewer 7 (Dec. 2022) #22

Open labarba opened 10 months ago

labarba commented 10 months ago

Reviewer #7 (Remarks to the Author):

This manuscript provides a pipeline and its software implementation to use Bempp-Exafmm to conduct virus-scale electrostatics simulations. After reading through the manuscript and a few related references, the reviewer has many concerns about the product, particularly its novelty, numerical accuracy, memory usage, and practical usage, as listed below. Based on these concerns, the reviewer rejects publication of the manuscript in Nature Computational Science.

Novelty: the work described is mostly a reassembly of, or an insignificant increment over, the authors’ previous work [11][15][39][53][60]. The comparisons between the direct [26] and derivative [27] boundary integral methods, or between the interior and exterior forms, have been clearly stated in many previous works [8][9][29]: the derivative method has an obvious advantage in convergence rate, and the exterior form is faster. There is no point in reporting test results or providing options in the code to let the user choose different formulations when there is obviously a winner already (e.g., Figures 1–2 and Table 4).

Numerical Accuracy: This is the main concern. The only case with an analytical solution that the authors report when considering accuracy is in Figure 2. The fact that the authors use the quantity of solvation energy, which is a single number, i.e., a weighted average of the reaction potential at the charge locations, to measure the convergence of the numerical algorithm is not correct. They should instead use the norm of the surface potential error. In fact, from Steinbach’s argument (978-0-387-31312-2), when a Galerkin boundary integral method with singularity removal is used, the solution can be of O(h), which is O(1/(N^2)), as opposed to the O(1/N) reported in this manuscript. The comparison in Table 2 also raises concern: the differences seem large, and tests should be done between the proposed method and the most accurate method (maybe MIBPB) with repeatedly refined meshes.

Memory: The memory usage reported in Table 6 is surprisingly large compared with previously reported boundary integral PB solvers [9][29]. The reviewer suspects that the authors may be using storage extensively in trade for efficiency.

Practical Usage: The Python code, as a wrapper, should provide users from the greater computational biophysics community with convenient interfaces to the potential biological applications of the PB solver, rather than showcasing how fast the solver can be or how large a target protein the solver can handle. The authors have access to very advanced supercomputers, which most potential users do not. The wrappers developed by APBS are good examples.

labarba commented 10 months ago

On Dec 9, 2022, at 8:21 PM, Tingyu Wang wrote:

The reviewer again misunderstood the novelty. To me, reviewers in this field neither need nor value the power of interactive computing presented here, and the capability of performing virus-scale simulations was totally ignored.

labarba commented 10 months ago

On Dec 20, 2022, at 12:17 PM, Lorena Barba wrote:

Reviewer 7 complains about our use of the quantity of solvation energy, a weighted average of the reaction potential at the charge locations. He says we should use the norm of the surface potential error instead.

- What difference does it make? They are both some sum over the surface, and sure, they will converge at different rates, but the point is that there is convergence.
- Are other papers using the norm of the surface potential error for convergence analysis? Why and how?
- Is it worth doing this? Will it add any information? (I doubt it.)

I guess we could just try that calculation on the sphere case. There is the issue of using the same surface points to evaluate the error on consecutive meshes, but maybe it’s not difficult.
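If we did try that calculation, the norm itself is the easy part. A minimal sketch, assuming both the numerical and analytical (Kirkwood-sphere) potentials have already been evaluated at a common set of surface points with quadrature weights — the function name and the synthetic data here are hypothetical, not part of our codebase:

```python
import numpy as np

def relative_l2_surface_error(phi_num, phi_ref, weights):
    """Relative L2 norm of the surface-potential error,
    ||phi_num - phi_ref||_L2 / ||phi_ref||_L2, with `weights`
    acting as quadrature weights (e.g. patch areas)."""
    diff = phi_num - phi_ref
    err = np.sqrt(np.sum(weights * diff**2))
    ref = np.sqrt(np.sum(weights * phi_ref**2))
    return err / ref

# Synthetic check: a uniform 1% perturbation yields a 1% relative error.
phi_ref = np.linspace(1.0, 2.0, 100)       # stand-in for the analytical values
phi_num = 1.01 * phi_ref                   # stand-in for the numerical solution
w = np.full(100, 4 * np.pi / 100)          # equal-area patches on a unit sphere
print(relative_l2_surface_error(phi_num, phi_ref, w))  # ~0.01
```

The harder part, as noted above, is mapping consecutive meshes to the same evaluation points (e.g., by interpolating each solution onto a fixed point set on the sphere).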

The question of novelty … Reviewer 7 again does not understand that we are not claiming novelty in the model, method, or algorithm; the claim is about the research platform.

Memory usage … yes, there is a memory cost to the overall approach: so what? This would not be a reason to reject the paper. We can just acknowledge the “limitation” and move on.

Practical usage … the user interface is Jupyter: a very popular and modern environment for computational work. What does he want? An old-style web server? Just his opinion, I guess, and not a reason to reject our paper.

labarba commented 10 months ago

On Dec 22, 2022, at 5:23 AM, Tingyu Wang wrote:

I agree with your comments here. We have clearly demonstrated the use cases for both computational scientists and biophysics audiences. Regarding the high memory usage, that is simply a consequence of the Galerkin method and the better accuracy of FMM (compared with treecode-accelerated BEM). Besides that, we have already explained that our method shines in applications that involve large structures or require higher accuracy, and we have acknowledged the "limitation" of being slower than grid-based methods for smaller structures in the discussion section.

labarba commented 10 months ago

On Jan 4, 2023, at 8:52 AM, Betcke, Timo wrote:

Some comments here with regards to convergence:

The reviewer claims that O(h) = O(N^{-2}). This is obviously wrong. The total number N of dofs grows quadratically as h gets smaller (because we have a surface mesh), so N \sim h^{-2}, or in other words N^{-1} \sim h^2. We observe convergence of order N^{-1} in the paper for the solvation energy, which is equivalent to an observed convergence of O(h^2), in other words quadratic convergence.

I just went through the convergence results in Steinbach’s book and cannot find anything that supports the reviewer’s remark that the solution is O(h) convergent. Steinbach’s analysis does not fully apply in our case. He gives an analysis for the Dirichlet problem using piecewise-constant basis functions; there, the rate of convergence on the boundary is O(h^{3/2}) (superlinear convergence). The resulting H^1-norm error in the interior then converges at the same rate (meaning the L^2 error converges faster; the derivative drags down the convergence rate). But what we need are piecewise-linear basis functions and pointwise error estimates of the function values (not derivatives) in the interior.

Generally, for an optimal Galerkin method with linear basis functions, we expect the Dirichlet error to converge quadratically, which is what we observe and which is not surprising (and we haven’t made theoretical claims about it).

It is frustrating that we are rejected again based on a reviewer assertion that in its first part is simply wrong (claiming that h \sim N^{-2}) and in its second part uses arguments that I simply cannot find in the cited book.

I looked through our preprint again and I find the use of solvation energy as the error measure for convergence fully appropriate. If we want precedent, this is called goal-oriented error estimation, meaning we do not measure the convergence of the solution itself but of a functional of the solution that is application relevant (see e.g. https://users.oden.utexas.edu/~oden/Dr._Oden_Reprints/2001-004.goal-oriented_error_CMWA.pdf, where the goal functional is defined as a dual-space element that evaluates the solution).
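Concretely, the quantity the paper tracks fits that framework directly: the solvation energy is a linear functional of the reaction potential, a weighted sum of point evaluations at the charge locations. A sketch of this, with notation chosen here for illustration (q_i the partial charges, x_i their locations, phi_reac the reaction potential):

```latex
% Solvation energy as a goal functional: a weighted sum of point
% evaluations of the reaction potential at the charge locations.
\Delta G_{\mathrm{solv}}
  = \tfrac{1}{2} \sum_{i=1}^{N_q} q_i \, \phi_{\mathrm{reac}}(\mathbf{x}_i)
  = J(\phi_{\mathrm{reac}}),
\qquad
J(\phi) = \tfrac{1}{2} \Big\langle \textstyle\sum_i q_i \,
  \delta_{\mathbf{x}_i},\; \phi \Big\rangle ,
```

i.e., J is exactly a dual-space element (a sum of weighted Dirac deltas) that evaluates the solution, which is the form of goal functional discussed in the reference above.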

At the end of the day, we were again stuck with a reviewer who has no numerical analysis competence but was presumably chosen as somebody who could assess us in this regard as well. This is frustrating, and just more evidence of the poor job the editor did with our paper.