Closed: ranocha closed this issue 1 month ago.
Thank you for the review! I will address these points individually below.
> The performance claims "ProbNum (Wenger et al., 2021) is a Python package that provides probabilistic numerical solvers for ODEs [...] with its reliance on Python and numpy, it is also generally less performant." and "ProbDiffEq provides ODE solvers with similar performance as those implemented in ProbNumDiffEq.jl." are not backed up by data in the paper itself.
To point you to the evidence that led to this vague statement: ProbNum is implemented in NumPy, which severely limits its performance, and its implementation focuses on generality rather than performance (disclaimer: I am one of the early main contributors to that software project). ProbDiffEq contains benchmarks, such as this one, which compare against diffrax in much the same way that we compare against DifferentialEquations.jl; I therefore thought that "similar performance" might be sufficiently well-founded (disclaimer: I worked closely with the main developer of ProbDiffEq for many years).
Is this sufficient evidence to keep the part in the paper as it is, or should I soften the statement or add a sentence to support it? I would also like to back it up much more strongly with a benchmark, but cross-language benchmarks are somewhat non-trivial and I am not sure how quickly I could get around to creating one, so I would strongly prefer to resolve this part of the issue without such a benchmark for now.
> It's nice that you mention related packages in the README. Could you please briefly point out how your package differs from theirs so that people know whether they should look deeper into your package or the alternatives?
I updated the README and extended the text to the following:
- [ProbDiffEq](https://pnkraemer.github.io/probdiffeq/) is similar in scope to ProbNumDiffEq.jl: it provides fast and feature-rich probabilistic ODE solvers, but it is implemented in Python and built on JAX.
- [ProbNum](https://probnum.readthedocs.io/en/latest/) implements a wide range of probabilistic numerical methods, not only for ODEs but also for linear algebra, quadrature, and filtering/smoothing. It is implemented in Python and NumPy, and it focuses more on breadth and didactic purposes than on performance.
Does this add the desired information and resolve your concern? If you have thoughts on how to improve it, please do let me know.
> Some citation references in the docstrings do not seem to work properly
Thank you for catching this! For the citations I use DocumenterCitations.jl, which produces very pleasant results on the documentation website, but I completely missed how the citations get rendered in the REPL. There is an open issue here that relates to this question; essentially, DocumenterCitations.jl is currently not well equipped to render citations nicely in the REPL. As far as I can tell, there is no way to keep the current numbered references (e.g. [4] and [5] as shown here) together with a shared "References" page (here) while also having perfectly readable REPL documentation. I see two ways forward: I could change the entry such that in the REPL the citation gets rendered as
[Krämer et al.] (@cite krämer21highdim)
but then it also gets shown as [Krämer et al.] in the documentation, and the numbers no longer align with the References section. Or I could keep this part as it is. Personally, I would prefer to focus on the online documentation, at the cost of some imperfections in the REPL, but I would be very happy about your input here. What would be the best way to resolve this issue to a level that is satisfactory to you?
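For concreteness, here is a minimal sketch of the two options, using placeholder functions rather than the actual ProbNumDiffEq.jl docstrings:

```julia
# Sketch only: `my_solver_a`/`my_solver_b` are placeholders for illustration.

# Option A (current): cite by BibTeX key. With the numbered style this renders
# as, e.g., "[4]" on the documentation website, consistent with the shared
# References page, but in the REPL only the raw citation key shows up.
"""
    my_solver_a()

Probabilistic ODE solver; see [krämer21highdim](@cite) for details.
"""
my_solver_a() = nothing

# Option B: cite with explicit text. The REPL then shows the readable author
# names, but the website also renders "[Krämer et al.]" instead of a number
# that matches the shared References page.
"""
    my_solver_b()

Probabilistic ODE solver; see [Krämer et al.](@cite krämer21highdim) for details.
"""
my_solver_b() = nothing
```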
> The benchmark in nathanaelbosch.github.io/ProbNumDiffEq.jl/stable/tutorials/dynamical_odes/#Benchmark:-Solving-second-order-ODEs-is-*faster* also uses different arguments, `EK1(order=3)` vs. `EK1(order=4)`, not only `prob` vs. `prob2`. Could you please comment on that in the tutorial?
Good catch! I updated the code to use `order=3` in both benchmarks.
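For illustration, a stripped-down version of what the two benchmark calls now look like could be the following sketch (a toy harmonic oscillator with made-up tolerances, not the actual tutorial problem):

```julia
using ProbNumDiffEq  # provides EK1 and the usual SciML solve interface

# First-order formulation: u = (position, velocity)
f1(du, u, p, t) = (du[1] = u[2]; du[2] = -u[1]; nothing)
prob = ODEProblem(f1, [1.0, 0.0], (0.0, 10.0))

# Second-order formulation: solve ü = -u directly
f2(ddu, du, u, p, t) = (ddu[1] = -u[1]; nothing)
prob2 = SecondOrderODEProblem(f2, [0.0], [1.0], (0.0, 10.0))

# Both calls now use the same solver configuration, EK1(order=3)
sol1 = solve(prob, EK1(order=3), abstol=1e-6, reltol=1e-6)
sol2 = solve(prob2, EK1(order=3), abstol=1e-6, reltol=1e-6)
```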
> There is no CONTRIBUTING.md and CODE_OF_CONDUCT.md or similar
Guidelines for third parties wishing to contribute to the software, report issues or problems, or seek support are provided in the Contributing section of the README. As that section is quite short, I thought this would be the most visible location. Is this satisfactory to resolve this part of the issue?
Thank you again for taking the time to review the package so carefully and for your valuable feedback, I appreciate it a lot.
> Is this sufficient evidence to keep the part in the paper as it is, or should I soften the statement or add a sentence to support it?
I would be fine if you explained this a bit, like you did above. Maybe you can cite some of the benchmarks of OrdinaryDiffEq.jl etc.
> Does this add the desired information and resolve your concern?
👍 Please remember to merge the PR and update the paper branch accordingly.
> Personally, I would prefer to focus on the online documentation, at the cost of some imperfections in the REPL, but I would be very happy about your input here. What would be the best way to resolve this issue to a level that is satisfactory to you?
That sounds fine. Please open an issue in your repo so that people know that this is a known issue (linking to the issue you mentioned above).
> Good catch! I updated the code to use `order=3` in both benchmarks.
👍
> Is this satisfactory to resolve this part of the issue?
Ok, let's keep it as it is.
Thank you for your contribution to the open-source software ecosystem. Feel free to close this issue once you have merged the corresponding PR.
Thank you for the fast and helpful reply!
I updated the text in the paper; in particular, I now mention that both packages include benchmarks that compare against SciPy, as those are currently the main evidence for the "similar performance" claim. I also merged the PR and rebased the paper branch onto main, and I opened #329 to keep track of the citation rendering issues in the REPL.
I believe this should resolve the remaining open points, and I will therefore close this issue (but of course if you have more comments please just re-open it). Thank you again for taking the time to review the package and for providing such helpful comments!
This is part of the review https://github.com/openjournals/joss-reviews/issues/7048
Thanks a lot for your contributions 🙂 Please find below some comments based on the review checklist:
- [ ] Some citation references in the docstrings do not seem to work properly, e.g., …
- [ ] The benchmark in nathanaelbosch.github.io/ProbNumDiffEq.jl/stable/tutorials/dynamical_odes/#Benchmark:-Solving-second-order-ODEs-is-*faster* also uses different arguments, `EK1(order=3)` vs. `EK1(order=4)`, not only `prob` vs. `prob2`. Could you please comment on that in the tutorial?