My initial thought is that wall-clock time isn't terribly useful and quickly becomes outdated anyway. Instead, we should encourage contributors to integrate new examples into our build automation, which can time out and thus fail the PR if a proof or model checking with TLC or Apalache takes too long.
Could change runtime to be size, with possible values small/medium/large. Small models must run in less than ten seconds on an ordinary computer built within the past ten years (so on the GitHub CI hosts), medium models in less than ten minutes, and large models are unbounded. The small models can even be added to the CI to ensure that specs added here are capable of being model checked without errors.
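A rough sketch of how the CI could enforce these budgets, assuming a hypothetical per-model size field in manifest.json and a local copy of tla2tools.jar (none of the field names are final):

```python
import json
import subprocess

# Assumed time budgets (seconds) for the proposed size categories.
TIMEOUTS = {"small": 10, "medium": 600}

def check_model(module_path, cfg_path, size):
    """Run TLC on one model; fail if it errors or exceeds its size budget."""
    if size == "large":
        return True  # large models are unbounded, so CI skips them
    try:
        result = subprocess.run(
            ["java", "-jar", "tla2tools.jar", "-config", cfg_path, module_path],
            capture_output=True,
            timeout=TIMEOUTS[size],
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

def check_all(manifest_path="manifest.json"):
    """Check every model listed in the (assumed) manifest schema."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    failures = []
    for spec in manifest["specifications"]:
        for module in spec["modules"]:
            for model in module.get("models", []):
                if not check_model(module["path"], model["path"], model["size"]):
                    failures.append(model["path"])
    return failures
```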
Here's another idea: instead of having both a manifest.json and a summary table in the README, the README table could list only beginner-friendly specs. People looking for specs for other reasons can read the manifest.json (which is human-readable enough) or browse the repo directly.
If technical obstacles prevent sourcing README.md from manifest.json, we should accept the redundancy in the interest of a beginner-friendly experience. @muenchnerkindl What do you think?
I agree that having a manifest.json along the lines you suggest would be useful (it would be good to include features related to Apalache, such as type annotations and models for bounded model checking or for checking inductive invariants). At first glance it appears that a table such as the one in the current README.md could be generated from the information in the JSON file; if not, we should probably have both, at the risk of inconsistencies.
I can also have the CI workflow auto-update the README.md table from the manifest.json, so that only the manifest.json is human-editable while the README is machine-generated. It's a bit messier but should be doable. I am partial to the idea of having only a curated list of beginner specs in the README, though; I don't think a giant markdown table of every spec in the repo is very useful.
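A sketch of what that generation step could look like, assuming hypothetical name, path, description, and authors fields per spec entry (not an agreed schema):

```python
import json

def readme_table(manifest_path="manifest.json"):
    """Render a markdown table of specs from the manifest.

    The field names used here (name, path, description, authors) are
    assumptions about the eventual schema, not a final design.
    """
    with open(manifest_path) as f:
        manifest = json.load(f)
    rows = [
        "| Name | Description | Authors |",
        "| ---- | ----------- | ------- |",
    ]
    for spec in manifest["specifications"]:
        rows.append(
            f"| [{spec['name']}]({spec['path']}) "
            f"| {spec['description']} "
            f"| {', '.join(spec['authors'])} |"
        )
    return "\n".join(rows)

if __name__ == "__main__":
    print(readme_table())
```

The workflow could then splice this output between marker comments in README.md, or fail the PR if the committed table is stale.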
I agree it would also be useful to add Apalache- and Snowcat-related feature flags.
I've written a fair number of the validation scripts and am now in the process of adding all the specs to the manifest.json. I have two design concerns:
1. Some specs, such as CarTalkPuzzle, have a CarTalkPuzzle.toolbox/Model_N subdirectory with a duplicated CarTalkPuzzle.tla spec. Should I rewrite the cfg files so they can live in the same directory as the main spec and avoid duplicating that spec?
2. Some entries only link to specs hosted outside this repo; I've put these in an external_links directory instead of the specifications directory. Thoughts? I would then want to pare these down over time by actually moving the linked specs into the repo, licensing allowing.

Finished the first draft of the manifest work. I like submodules, but we do need to consider that broken links are a possibility; if the repo is moved or deleted (pretty easy to have happen as years draw on), then the specs will disappear. This has already happened for a couple of the links to non-GitHub-hosted specs.
Assuming we have submodules, every clone will automatically create decentralized copies of those repos.
Only if people pass the --recurse-submodules flag to git clone.
Perhaps the GitHub UI already includes the parameter in its clone string. Adding a note to our README.md to include --recurse-submodules when cloning can spread the word, too. Secondly, we can quickly set up a cron job on our Inria machine to periodically clone this repo.
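For the README note, the standard git commands would be along these lines:

```sh
# Clone the repo together with all submodule'd external specs
git clone --recurse-submodules https://github.com/tlaplus/Examples.git

# Or, fetch the submodules in an already-cloned repo
git submodule update --init --recursive
```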
Experimenting with submodules today: with submodules, our CI will detect broken links, whereas lost specs go unnoticed with the current README.md files.
Here's an example entry in the manifest.json:
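A sketch of the shape such an entry might take; every field name and value below is an illustrative placeholder rather than the actual entry or a final schema:

```json
{
  "path": "specifications/CarTalkPuzzle",
  "title": "Car Talk Puzzle",
  "description": "…",
  "authors": ["…"],
  "modules": [
    {
      "path": "specifications/CarTalkPuzzle/CarTalkPuzzle.tla",
      "features": [],
      "models": [
        {
          "path": "specifications/CarTalkPuzzle/Model_1.cfg",
          "size": "small",
          "features": []
        }
      ]
    }
  ]
}
```

The features lists would be the natural place for flags like the Apalache- and Snowcat-related ones discussed above.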
So you have a specification (directory) with one or more modules (.tla files), where each module can have zero or more models (.cfg files). Similar to the Toolbox! I envision a number of interesting pieces of metadata associated with each of these entities that can be searched or filtered by a program consuming this repo.
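For example, a program consuming the repo could filter on that metadata; a minimal sketch under the same illustrative schema as above:

```python
import json

def models_of_size(manifest, size):
    """Yield (spec, module, model) triples whose model matches the given size."""
    for spec in manifest["specifications"]:
        for module in spec["modules"]:
            for model in module.get("models", []):
                if model.get("size") == size:
                    yield spec, module, model

with open("manifest.json") as f:
    manifest = json.load(f)

# E.g. list every small model that the CI could check on each PR.
for spec, module, model in models_of_size(manifest, "small"):
    print(spec["path"], model["path"])
```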
I will write a CI script that runs a set of validation checks when a PR is opened.
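One plausible example of such a check, again assuming the illustrative schema above: verify that every path the manifest references actually exists in the repository.

```python
import json
from pathlib import Path

def missing_paths(manifest_path="manifest.json"):
    """Return manifest-referenced module and model paths that don't exist on disk."""
    manifest = json.loads(Path(manifest_path).read_text())
    missing = []
    for spec in manifest["specifications"]:
        for module in spec["modules"]:
            paths = [module["path"]] + [m["path"] for m in module.get("models", [])]
            missing += [p for p in paths if not Path(p).is_file()]
    return missing

if __name__ == "__main__":
    missing = missing_paths()
    if missing:
        raise SystemExit("Files referenced in manifest but not found: " + ", ".join(missing))
```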
I agree it would not be useful to have this information duplicated in a manifest and a README, which could get out of sync. I will look into whether it's possible to pull this info into an HTML file and have it displayed or something of that sort. It does look like this work would resolve that issue you linked.
Originally posted by @ahelwer in https://github.com/tlaplus/Examples/issues/55#issuecomment-1383242530