RuleBasedIntegration / JOSS-Publication

Text-sources for the publication of Rubi in the Journal of Open-Source Software
MIT License

Versions / Tags for the repositories associated with the publication #2

Closed rljacobson closed 5 years ago

rljacobson commented 5 years ago

This JOSS submission covers the following content:

The version of Rubi under review is v4.16.0.4. The developer wiki explains that this version number includes both the integration rules (the "Engine") and the Mathematica package (the "Interface") simultaneously:

The first 3 numbers represent the Rubi engine version, which is the version of the integration rules that are Albert's responsibility. The last number is the version of the interface, which is the package code itself that is required to load the Rubi rules, provide Steps/Stats, and format expressions.
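The split described above can be sketched with a small helper (a hypothetical illustration, not part of Rubi):

```python
def split_rubi_version(version: str) -> tuple[str, str]:
    """Split a combined Rubi version string into the engine version
    (first three numbers, the integration rules) and the interface
    version (last number, the Mathematica package)."""
    parts = version.split(".")
    if len(parts) != 4:
        raise ValueError(f"expected 4 components, got {version!r}")
    return ".".join(parts[:3]), parts[3]

engine, interface = split_rubi_version("4.16.0.4")
# engine is the Rubi engine version, interface the package version
```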

I recommend the following version-related issues be resolved:

rljacobson commented 5 years ago
halirutan commented 5 years ago

@rljacobson

The developer wiki explains that this version number includes both the integration rules (the "Engine") and the Mathematica package (the "Interface") simultaneously

Correct. Albert Rich had an XX.XX.XX version number for the Rubi engine and wanted to keep it that way. However, we needed another indicator for the version of the Mathematica user interface, which is decoupled from the rules.

Your suggestions:

Suggestion 1

I checked several JOSS publications and none referenced the version explicitly. However, the links provided on the left side of the final PDF point to (a) the current repository and (b) the archived version, which is exactly the 4.16.0.4 release (I had to give this tag when submitting the paper).

Suggestion 2

Yes, you are correct. I have rebuilt the PDF files for the current 4.16.0 Rubi engine and there is now a release for the PDF file catalog that is up-to-date and has the same version number as the Rubi engine.

Suggestion 3

Likewise, I tagged the commit from Aug 03 since this was the version we used to test Rubi 4.16.0. I'd like to postpone tagging the other test suites until the next release, since the latest comparison with other CAS was done in early summer and I am not entirely sure whether Nasser used this version. However, the tests are only growing, and their main purpose is to ensure that we have no regressions when Albert Rich implements new Rubi rules.

We are currently discussing whether we can find someone to help implement Albert Rich's very detailed integrator test program for other CAS. This is a tricky business: Mathematica is strong in symbolic computation, and the question is what to do when the built-in integrator gives a result that differs from the optimal antiderivative. The trivial solutions to this are unfortunately sometimes hard to implement:

  1. You can try to simplify the solution to the known antiderivative. That's very often impossible.
  2. You can try to differentiate the result and simplify it back to the input. That is a viable approach, but it depends strongly on the performance of functions like Simplify. However, even this path is sometimes not possible. If you have Rubi, you can use the example from the paper to see the complexity of this:
input = (Sec[x]^2 + Sec[x]^2*Tan[x])/((2 - Tan[x])*Sqrt[1 + Tan[x]^3]);
D[Int[input, x], x] == input // Simplify

(* True *)

Now, try this when you use Mathematica's Integrate instead. So the primary purpose of the integration tests is to ensure we get correct results for Rubi and we don't have regressions.
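The differentiate-and-simplify check described above can be sketched generically in SymPy (my own illustration with a deliberately simple integrand, not the paper's example; Rubi itself runs in Mathematica):

```python
import sympy as sp

x = sp.symbols("x")

def verify_antiderivative(integrand, antiderivative):
    """Verify a candidate antiderivative by differentiating it and
    simplifying the difference to the original integrand down to zero."""
    return sp.simplify(sp.diff(antiderivative, x) - integrand) == 0

# A simple integrand for illustration:
f = sp.sin(x) * sp.cos(x)
F = sp.integrate(f, x)
ok = verify_antiderivative(f, F)
```

As the thread notes, this check hinges entirely on how powerful the simplifier is; for hard integrands the difference may never simplify to zero even when the antiderivative is correct.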

Suggestion 4

I hope this is not necessary. Rubi provides only three user functions: Int, Steps, and Stats. These haven't changed in years, and it is only this high-level user documentation that is provided on rulebasedintegration.org. If something changes in the implementation, then it will be documented in the Rubi repository, but it should not affect the user interface. Additionally, I dearly hope the readers of the manuscript will check the Rubi website in 2025 and not download version 4.16 from ancient history.

halirutan commented 5 years ago

Addition to Suggestion 4

To give my opinion on suggestion 4 some more weight: A similar case can be found in the fourth-newest JOSS publication, where the manuscript points to a specific release, but the user documentation for the most recent version lives separately on https://prestsoftware.com/ and is cited in the manuscript. So while the archived version points to 0.9.8, the online documentation is at version 0.9.11. I completely understand your point about pinning the documentation version to the archived version, but I hope we can count on sensible users.

I'm not entirely sure, but I believe the main reason for using a tagged archived version is to get a DOI from Zenodo, which requires a GitHub release.

rljacobson commented 5 years ago

Suggestion 1

I made this suggestion on the basis of the following reviewer checklist item:

Does the release version given match the GitHub release (4.16.0.4)?

The GitHub release is 4.16.0.4, and if you provided the 4.16.0.4 tagged release with the submission, then I agree that this checklist item is satisfied. Thanks!

Suggestion 2

Resolved.

Suggestion 3

My suggestion to version the test suites is partly motivated by what might be a misunderstanding of their role in Rubi. If they are intended only as tests for Rubi itself, then it is probably not as important that they are versioned except as an indicator to the development team internally of which Rubi version passed which set of tests.

But I am under the (possibly mistaken) impression that the test suites are used to compare the performance/functionality of Rubi against that of other CAS. It is this use case that I think necessitates versioning the test suites. But if this is not intended to be a supported use case of the test suites, then I downgrade my suggestion from "recommended to be resolved" to "think about considering whether it's right for your project," and would consider this part of the issue resolved. :)

If I understand you correctly, you have tagged the test suites used in Nasser's performance report and upon which the performance claims on the website/in the documentation are made. Identifying the test suite used specifically for these claims is sufficient in my view. Moreover, I now see that Nasser's performance report already includes download links to the specific test suites he used, so even a tag in the repository is not strictly necessary in my view (even if I personally think it's a good idea).

Suggestion 4

I am persuaded by your well-reasoned points and retract this suggestion.

halirutan commented 5 years ago

@rljacobson Thank you for being generous in understanding my points. As I said earlier, the tests' main purpose is to verify Rubi, and the tagged MathematicaTestSuite shows the state that was used for Rubi 4.16.0.4, the version we want to publish in the manuscript.

The system for comparing different CAS is not in its final form, because for many CAS we don't have a verification phase for the found result (proving that the antiderivative is correct). I quote Nasser:

A verification phase was applied to the result of integration for Rubi and Mathematica. Future versions of this report will implement verification for the other CAS systems. For the integrals whose result was not run through a verification phase, it is assumed that the antiderivative produced was correct.

This is what I meant by being "fair" to other systems. In addition, the grading of the results, which essentially compares a result's complexity to the known, optimal antiderivative, is currently only available for Mathematica, Rubi, and Maple. This is the reason why we have recently switched the graphics on the website to show only these three systems instead of the whole bar chart.

[Bar chart of graded integration results, showing only Mathematica, Rubi, and Maple]

Here, it might appear as if, e.g., FriCAS was better than Maple, but in truth we simply could not grade the results for the other systems.
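The grading described here compares a result's complexity to the known optimal antiderivative. A minimal sketch of one such metric in SymPy (my own illustration, assuming leaf count as the complexity measure; this is not the actual grading code):

```python
import sympy as sp

x = sp.symbols("x")

def leaf_count(expr):
    """Number of atoms in the expression tree, a crude size measure."""
    return sum(1 for node in sp.preorder_traversal(expr) if not node.args)

def grade(candidate, optimal):
    """Size of a CAS result relative to the known optimal antiderivative;
    larger ratios mean a more complicated (worse-graded) result."""
    return leaf_count(candidate) / leaf_count(optimal)

optimal = sp.sin(x) ** 2 / 2      # the known "optimal" antiderivative
candidate = -sp.cos(2 * x) / 4    # an equivalent but differently shaped form
ratio = grade(candidate, optimal)
```

A metric like this only makes sense once the candidate has passed verification, which is exactly why the grading is restricted to the systems with a verification phase.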

My hope for these comparisons is that we can (a) acquire some more manpower, especially people who bring Rubi to other systems and (b) generate some positive energy in the form of "why can't we do this?" This seems to work when you read messages like this one on the FriCAS forum or when you hear in one of the recent Wolfram Twitch streams that Wolfram is including some of Rubi's rules in their upcoming version.

rljacobson commented 5 years ago

My hope for these comparisons is that we can... generate some positive energy in the form of "why can't we do this?"

Which is why I think your test suites are a significant part of your project. The grading/verification problems aside, one could use them to compare computer algebra systems against one another independently of the Rubi rules.

Those two examples in your last sentence demonstrate the advantage of open-source software well, in my view. Both are quite encouraging.