OpenModelica / OpenModelicaLibraryTesting

Test script for OMCompiler+OpenModelicaLibraries

How to test local/unpublished packages? #26

Closed bilderbuchi closed 1 year ago

bilderbuchi commented 1 year ago

I am trying to get OpenModelicaLibraryTesting to run tests on an unpublished package that resides in a local Git repository. I can get tests to run once, but subsequent runs are skipped because "we already have results for it".

Judging from the ClaRa library config, which seems to load a local copy/version, it's possible to load packages from a local source, so my config looks like this:

[
  {
    "library":"MyModelicaLibrary",
    "libraryVersion":"main",
    "libraryVersionNameForTests":"dev",
    "loadFileCommands":[
      "loadFile(\"path/to/MyModelicaLibrary/package.mo\")"
    ]
  }
]

"main" is the name of the master/trunk branch.

Note: I skipped the "install your library" step, since I don't know how it would apply here: OMPackageManager obviously does not know about the package.

When I run ./test.py --noclean path/to/myconf.json it runs tests, and I can generate a report with test results. So this works in principle, this is great! 🥳

The tests surfaced a problem in a test model, so I fixed it, committed to main, and ran the tests again. Now I'm greeted with:

Skipping MyModelicaLibrary_dev as we already have results for it: (1671700099, '0.3 (path/to/MyModelicaLibrary/package.mo)', 'MyModelicaLibrary_dev', 'master', 'OpenModelica 1.20.0')

0.3 is the last tagged version (but not the current commit). Note how it says master, not main as configured in libraryVersion -- is this something else?

I expected this to notice the new commit on main and run the tests for it, but it does not. 😕 My main problem is that I can't seem to re-run testing. Probably I'd need to nuke sqlite3.db, but that would throw away all testing history, which does not seem useful. What am I doing wrong? The project Readme indicates that testing unpublished packages should be possible, but I could not figure out how.

If that issue can be solved, a subsequent question is how to make this test arbitrary checkouts/SHAs/dirty states. I nearly always work in feature branches, so never on main. In addition, ideally I want to run the tests (although not necessarily archive results) before finishing a commit, so the git state of my local repo might be dirty. However, the config file seems to want to hardcode a branch with the libraryVersion key. What do I have to put into the config file so that the tester picks up the current local commit branch/SHA/status, e.g. from git describe --tags --dirty? By using loadFile above, the tests should already run on the current state, but the report does not seem to notice that at all. The ClaRa_dev test runs seem to pick up a SHA, but I could not figure out how this happens (it's not in https://github.com/OpenModelica/OMPackageManager/blob/master/repos.json), or how to get this for a locally available library.

Note: all this happens in a Docker container into which MyModelicaLibrary is mounted, in a sibling directory to the cloned OpenModelicaLibraryTesting repo (dependencies and OM installed, of course). I don't think that has any bearing on the present problem.

casella commented 1 year ago

Maybe @AnHeuermann can help, I understand he did it on his machine.

AnHeuermann commented 1 year ago

What am I doing wrong? The project Readme indicates that testing unpublished packages should be possible, but I could not figure out how.

From what I understand you were successful in testing your local library.

I had the same issue that tests won't run twice in a row (I changed omc rather than the library, and didn't commit). My hack was to remove the check in test.py.

In addition, ideally I want to run the tests (although not necessarily archive results) before finishing a commit, so the git state of my local repo might be dirty.

Search for the part where it stops and remove it / hack test.py until it works for you. But I never tried to understand test.py, so I'm afraid I can't help much with this.

bilderbuchi commented 1 year ago

Thanks, that helped! The minimal change that I ended up using was putting the output of git describe --tags HEAD into conf["libraryVersionRevision"] if the config file has "libraryVersion":"HEAD". This at least takes care of testing individual commits between library versions.
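The change described above can be sketched roughly like this. Note this is a minimal sketch: the helper name, the conf keys, and the exact call site in test.py are my assumptions; only the idea of feeding git describe --tags --dirty output into libraryVersionRevision comes from the discussion.

```python
import subprocess

def library_version_revision(conf, repo_dir):
    """Return a fine-grained revision string for a local library checkout.

    If the config pins libraryVersion to "HEAD", ask git for a
    describe-style string (nearest tag, commit distance, -dirty suffix);
    otherwise fall back to whatever revision the config already holds.
    Helper name and conf keys are illustrative, not test.py's real API.
    """
    if conf.get("libraryVersion") == "HEAD":
        out = subprocess.run(
            ["git", "describe", "--tags", "--always", "--dirty"],
            cwd=repo_dir, capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()
    return conf.get("libraryVersionRevision", "")
```

With something like this, each commit (or dirty working tree) gets its own libraryVersionRevision, so the "we already have results for it" check no longer skips the run.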

I have now realised that the history-keeping and -reporting seem heavily oriented/hardcoded around OM's history (as I could have expected), not the libraries'. I'm not sure anymore if working around the various limitations is even worth it, or if it's not actually better to start from a clean slate/container for every test run (so with an empty database), and only try to get an HTML report of the last library test run out.

sjoelund commented 1 year ago

It looks at changes to both library and OM version. We do test OM versions that are kept frozen and run updated libraries on this in order to see changes.

It's not a problem to keep running OM 1.20.0 or 1.19.0 for a long time and only then update the OM version (and perhaps see lots of changes to the coverage for that single run).

bilderbuchi commented 1 year ago

With "updated libraries" you mean from one library version/tag to the next, correct? As-is, I could not make it run tests on a new library commit, because the conf["libraryVersionRevision"] = omc.sendExpression('getVersion(%s)' % library) part stayed unchanged, leading to skipped tests. Hence the change I sketched above, which made that work: libraryVersionRevision can now be more fine-grained than just the Modelica package version.

bilderbuchi commented 1 year ago

We do test OM versions that are kept frozen and run updated libraries on this in order to see changes.

I'll have to take a closer look if I can convince all-reports.py with the appropriate arguments to spit that out locally, without e.g. a relevant --githuburl etc. 👍

sjoelund commented 1 year ago

The normal way we use it is to put metadata into the library with our package manager. So it doesn't use the annotation, but rather whatever you put in the metadata: either a file Modelica 3.2.3.last_change or Modelica 3.2.3/openmodelica.metadata.json, which would contain JSON like:

{
  "sha":"e68d7a0317c565ee00c1b8d44c527979ea0304bf",
  "version":"3.2.3+maint.om"
}
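For a local, unpublished library, such a metadata file can be generated from the checkout itself. A minimal sketch, assuming the file layout from the comment above; the helper name and its arguments are my own, not part of the tooling:

```python
import json
import subprocess
from pathlib import Path

def write_openmodelica_metadata(library_dir, version):
    """Write <library_dir>/openmodelica.metadata.json with the current
    git SHA, mimicking what OMPackageManager records for published
    libraries. Helper name is illustrative, not part of the tooling."""
    sha = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        cwd=library_dir, capture_output=True, text=True, check=True,
    ).stdout.strip()
    meta = {"sha": sha, "version": version}
    path = Path(library_dir) / "openmodelica.metadata.json"
    path.write_text(json.dumps(meta, indent=2) + "\n")
    return meta
```

Regenerating this file before each run (e.g. from a pre-test script) would give every commit a distinct sha in the metadata.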
bilderbuchi commented 1 year ago

I can confirm that using the openmodelica.metadata.json file as advised lets me re-run tests with different commits/SHAs, so I think this can be closed.

For some reason, in the config.json I needed to set "libraryVersion":"", otherwise loading the package failed, but I have not conclusively tracked that issue down -- something for later.