MESAHub / mesa

Modules for Experiments in Stellar Astrophysics
http://mesastar.org
GNU Lesser General Public License v2.1

Part 1: automating documentation/images for the test suite #681

Closed pmocz closed 2 months ago

pmocz commented 2 months ago

This PR is the first part of automating documentation/images for the test suite.

The docs can be updated using:

./each_test_run --update-docs

or for a specific problem number, e.g.:

./each_test_run --update-docs 41

When run, the script looks for the images referenced in the test problem's README and updates them from a successful run of that test problem. If an .svg is requested, the script automatically converts MESA's output to svg using inkscape/scour (svg is preferred over png because it is vectorized). For the script to recognize the figures, the test problem needs to generate them inside a subfolder called pgstar_out/, pgstar_out1/, pgstar_out2/, png/, png1/, or png2/. The script also updates the Last-Updated note in the problem's README.rst.
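As a rough illustration (not the actual each_test_run implementation), the update step could look something like the Python sketch below: scan the README.rst for image directives, pull the newest matching figure from the recognized output folders, optionally run it through scour, and refresh the Last-Updated note. The paths, directive regex, and scour invocation here are all assumptions.

```python
# Hypothetical sketch of the doc-update step; not the code in this PR.
import re
import shutil
import subprocess
from datetime import date
from pathlib import Path

OUTPUT_DIRS = ("pgstar_out", "pgstar_out1", "pgstar_out2", "png", "png1", "png2")

def update_docs(test_dir: Path) -> None:
    readme = test_dir / "README.rst"
    text = readme.read_text()

    # image targets from rst directives such as ".. image:: grid.svg"
    for target in re.findall(r"^\.\. (?:image|figure):: +(\S+)", text, flags=re.M):
        stem = Path(target).stem
        candidates = [p
                      for d in OUTPUT_DIRS if (test_dir / d).is_dir()
                      for p in (test_dir / d).glob(f"{stem}.*")]
        if not candidates:
            continue
        newest = max(candidates, key=lambda p: p.stat().st_mtime)
        dest = test_dir / target
        if dest.suffix == ".svg" and newest.suffix == ".svg":
            # assumes scour is installed; shrinks the svg before it goes into the docs
            subprocess.run(["scour", "-i", str(newest), "-o", str(dest)], check=True)
        else:
            shutil.copy(newest, dest)

    # assumes a "Last-Updated: YYYY-MM-DD" line exists in the README
    text = re.sub(r"^Last-Updated: .*$",
                  f"Last-Updated: {date.today():%Y-%m-%d}", text, flags=re.M)
    readme.write_text(text)
```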

This script flag can be incorporated into an automated doc updating strategy (TBD).

I am also adding tags to test problems.

[screenshot: the new test problem tags]

Right now, I've only run the autogeneration on problems 13, 15, 29, and 41. The rest of the problems still need to be given inlist_pgstar files / scripted plotting, and their READMEs modified to make sure they generate the required images (preferably svg).

fxt44 commented 2 months ago

interesting. if 5 machines test, which machine takes precedence in the docs? the last one? would it be useful to have a "gold" plot (the current accepted "right" answer) and a plot from a recent test suite run?

pmocz commented 2 months ago

Right now the docs are only updated automatically if an extra flag is switched on, which is off by default in all the automated testing. So the command would have to be run manually to update the docs, and changes in the docs manually committed to the git repo.

Maybe we could designate a machine to routinely update the images? I like the idea of having a gold standard as well!

pmocz commented 2 months ago

Another option could be to update the figures in the docs only with each version release (and make sure they are up to "gold-standard" quality), and otherwise just save images from test runs to a log server.

fxt44 commented 2 months ago

... changes in the docs manually committed to the git repo

this part sounds like a lot of recurring hand-care and feeding, but maybe i misunderstand. could this doc update step be more automated? otherwise, i'm happy to do doc update runs.

the idea of showing "current accepted answer" and "recent run" plots (thus doubling the size of most/all pages) is to allow a quick chi-by-eye comparison. however, other alarm bells should have gone off long before these comparison plots would show significant differences, so maybe it's not so useful an idea to show both.

pmocz commented 2 months ago

this part sounds like a lot of recurring hand-care and feeding, but maybe i misunderstand. could this doc update step be more automated? otherwise, i'm happy to do doc update runs.

The git add/commit could be entirely automated if we wanted, with the flag to update docs enabled by default. There are a few different strategies we could take, which merit discussion:

Strategy 1: automatically include a "gold standard" and "latest" version of plots in the docs in the github repo.

Strategy 2: include a "gold standard" version of plots in the docs, which are the plots created with the latest release version. Store images from tests as artifacts on a server (for 90 days), which can be compared against the "gold standard" via the TestHub.

Strategy 3: other strategies?

I'd opt for something like Strategy 2, to keep things simple and clean for users and to not commit too much image data to the git repo.

fxt44 commented 2 months ago

about those "images" in the documentation for the test suite cases.

many of these date from my 2021 summer-of-docs. because they were going to be public facing on the web, i wanted the highest quality pgstar output. this meant no rasterized image formats, and some manual work.

for each test suite case, i set pgstar to output postscript with a black background. i then edited the vector figure in illustrator. often i reduced the number of control points defining a vector curve in order to reduce the rendering time of the vector figure. for example, one doesn't need 20000 time points to define a flat line for an abundance that is constant. this led me to some research on automatic point removal algorithms that summer. i then saved the figure in vector pdf and svg formats for web consumption.
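For context, a minimal Python sketch of the kind of automatic point removal mentioned above (the Ramer-Douglas-Peucker idea): drop points whose deviation from the straight chord between the endpoints is below a tolerance, so a flat curve sampled at 20000 time points collapses to its two endpoints. This is only an illustration; the 2021 figures were thinned by hand in Illustrator.

```python
# Ramer-Douglas-Peucker polyline simplification (illustrative sketch only).
import numpy as np

def rdp(points: np.ndarray, tol: float) -> np.ndarray:
    """points: (N, 2) array of (x, y); returns the simplified polyline."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    dx, dy = end - start
    norm = np.hypot(dx, dy) or 1.0
    # perpendicular distance of every point from the start-end chord
    dists = np.abs(dx * (points[:, 1] - start[1]) - dy * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] <= tol:
        return np.vstack([start, end])      # everything in between is redundant
    left = rdp(points[: idx + 1], tol)      # keep the most-deviant point, recurse
    right = rdp(points[idx:], tol)
    return np.vstack([left[:-1], right])

# a constant abundance over 20000 timesteps reduces to 2 control points
t = np.linspace(0, 1, 20000)
flat = np.column_stack([t, np.full_like(t, 0.7)])
print(len(rdp(flat, tol=1e-6)))  # -> 2
```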

the resulting svg files are what one sees in the test suite docs. they look good because the figures are vector. if we decide to change to rasterized image files, then one consequence will be that the public facing docs will not be as sharp.

pmocz commented 2 months ago

the resulting svg files are what one sees in the test suite docs. they look good because the figures are vector. if we decide to change to rasterized image files, then one consequence will be that the public facing docs will not be as sharp.

I prefer svg (vectorized) files as well. In that case I will probably need to use some open-source tools like inkscape/scour to automatically shrink their file sizes from the command line.

fxt44 commented 2 months ago

oh sure, there are command line tools to easily go from ps to pdf & svg and we could automate using these. the real issue is the size (thus rendering speed) of the pgstar output. every time point for every curve in history plots and/or every mass point for every curve in profile plots is a control point. put a few of these plots in a grid plot and the file size (download speed) and rendering time get much larger than what i think is reasonable for a public facing svg (or even a journal article!)
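For reference, a conversion chain like the one described could be scripted along these lines, assuming ghostscript's ps2pdf, pdf2svg, and scour are on the PATH; this is a sketch, not an agreed-on tool choice, and as noted it does nothing about the control-point count that actually drives the file size.

```python
# Sketch: pgstar postscript -> pdf -> svg -> web-optimized svg.
import subprocess
from pathlib import Path

def ps_to_web_svg(ps_file: str) -> Path:
    ps = Path(ps_file)
    pdf = ps.with_suffix(".pdf")
    raw_svg = ps.with_name(ps.stem + "_raw.svg")
    svg = ps.with_suffix(".svg")
    subprocess.run(["ps2pdf", str(ps), str(pdf)], check=True)
    subprocess.run(["pdf2svg", str(pdf), str(raw_svg)], check=True)
    # scour strips metadata and rounds coordinates; it does not remove the
    # per-timestep control points described above, so huge curves stay huge
    subprocess.run(["scour", "-i", str(raw_svg), "-o", str(svg)], check=True)
    return svg
```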

earlbellinger commented 2 months ago

Nice! Love the tags.

Automated pgplots would allow us to see how things change over time as the codebase changes, so I think there is a lot of value there.

I think it would also be great and very handy if the test suite documentation pages would show the contents of all the inlists (perhaps in collapsible blocks), and not only the pgstar inlist!

wmwolf commented 2 months ago

I'm not sure if this would simplify or complicate things further, but would it make more sense to just ship a matplotlib script inside each test case that creates desired figures at the end of a run? I believe matplotlib already has algorithms for creating vector plots that attempt to simplify curves with many control points. This adds a dependency (python + matplotlib), of course, but it's not a very ridiculous one.

I really like the idea of making these accessible from the test hub using the logs server. In principle, it could work very similarly to how we handle the full test outputs already.

pmocz commented 2 months ago

I'm not sure if this would simplify or complicate things further, but would it make more sense to just ship a matplotlib script inside each test case that creates desired figures at the end of a run? I believe matplotlib already has algorithms for creating vector plots that attempt to simplify curves with many control points. This adds a dependency (python + matplotlib), of course, but it's not a very ridiculous one.

I really like the idea of making these accessible from the test hub using the logs server. In principle, it could work very similarly to how we handle the full test outputs already.

Using Python/Matplotlib to generate some plots could be a good idea. Maybe we could try it out on one or two test problems and see how widely it should be adopted? Users may also find it useful to see examples of how to use Python to visualize the MESA output. I think including pgstar images makes sense too, because we already generate them, and it's what users see while running the code. (The only tricky thing is that auto-generating compressed svg files needs some extra dependencies like inkscape/scour, but I do prefer that to .png images.)

I'm leaning towards pushing images into a logs server too, rather than updating the docs every time. You're right, it could use very similar infrastructure to what we already have for the json server!
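To make the matplotlib idea concrete, a per-test-case script might look something like the sketch below. The LOGS/ path and the figure choice are assumptions; log_Teff and log_L are standard default MESA history columns, and whether matplotlib's built-in path simplification actually shrinks the svg output is something to verify.

```python
# Hypothetical per-test-case plotting script: history file -> HR diagram svg.
import numpy as np
import matplotlib
matplotlib.use("Agg")                      # no display needed on a test machine
import matplotlib.pyplot as plt

# matplotlib's path-simplification options are the kind of control-point
# reduction mentioned above; whether they help svg file size is worth checking
plt.rcParams["path.simplify"] = True
plt.rcParams["path.simplify_threshold"] = 0.5

# MESA history.data: 5 header lines, then column names, then the data
h = np.genfromtxt("LOGS/history.data", skip_header=5, names=True)

fig, ax = plt.subplots(figsize=(4, 3))
ax.plot(h["log_Teff"], h["log_L"], lw=1)
ax.invert_xaxis()                          # HR diagrams run hot to cool, left to right
ax.set_xlabel(r"$\log\,T_\mathrm{eff}$")
ax.set_ylabel(r"$\log\,L/L_\odot$")
fig.tight_layout()
fig.savefig("hr.svg")
```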

evbauer commented 2 months ago

Unfortunately Read the Docs failed to build on main after merging this, so I think we need to revert for now. It looks like it's unhappy with some of the new requirements.txt entries: https://readthedocs.org/projects/mesa-doc/builds/25058802/

I've activated this branch (hidden) on Read the Docs so you can continue pushing to it to test Read the Docs builds, but it won't show up in the version selection for users.

evbauer commented 2 months ago

Also, given that we have a public release coming up very soon, it might be better to hold off on merging changes of this scale until after the release.