wilfwilson opened 2 years ago
Easiest way to deal with those: not at all! The argument could go like this (I'm not saying this is my stance):
They are not deposited; we already have enough headaches dealing with the packages that we promised to deal with (the deposited ones), and we can't afford to spend effort on additional packages.
But let's say we want to test at least some additional packages; e.g. because they are candidates for depositing, or simply because we care about them for some reason.
Then we could "simply" "install" them recursively (we have a script for that in PackageDistroTools already). That takes care of "internal" dependencies.
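As a rough illustration of what "recursively installing" could mean, here is a sketch of collecting a package's transitive internal dependencies in installation order. The `DEPS` map is hard-coded example data taken from the dependency chain described in this issue; the real script would read the dependencies from each package's `PackageInfo.g`, and `install_order` is a hypothetical helper, not part of PackageDistroTools.

```python
# Illustrative only: hard-coded dependency data; a real implementation
# would parse each package's PackageInfo.g.
DEPS = {
    "Vole": ["GraphBacktracking"],
    "GraphBacktracking": ["BacktrackKit"],
    "BacktrackKit": ["QuickCheck"],
    "QuickCheck": [],
}

def install_order(pkg, order=None):
    """Return pkg and its transitive dependencies, dependencies first."""
    if order is None:
        order = []
    for dep in DEPS.get(pkg, []):
        if dep not in order:
            install_order(dep, order)
    if pkg not in order:
        order.append(pkg)
    return order

print(install_order("Vole"))
# → ['QuickCheck', 'BacktrackKit', 'GraphBacktracking', 'Vole']
```

Installing the packages in this order guarantees that each package's internal dependencies are already present when it is installed.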
As to "external" dependencies: things like Rust I'd install on the fly in the containers whenever reasonably possible, via apt-get install rust or so; that means we'd need some way to encode these requirements in a uniform way, and then act upon it. We could have a file somewhere which maps GAP package names to lists of the Ubuntu packages needed by those. In fact, we could use that list for deposited packages, too. It could then either be used to install dependencies on the fly, or else as input for the scripts that generate the Docker images...
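The mapping file idea could look something like the following sketch: a small JSON file mapping GAP package names to Ubuntu packages, and a helper that turns it into an apt-get invocation. The file contents here (including the Ubuntu package names) are assumptions for illustration, not an existing part of PackageDistroTools.

```python
import json

# Illustrative contents of a hypothetical mapping file, e.g.
# "external-dependencies.json"; the Ubuntu package names are assumptions.
MAPPING = json.loads("""
{
  "Vole": ["rustc", "cargo"],
  "NormalizInterface": ["libnormaliz-dev"]
}
""")

def apt_command(gap_packages):
    """Build the apt-get command needed before testing the given GAP
    packages; return None if nothing external needs installing."""
    needed = sorted({u for p in gap_packages for u in MAPPING.get(p, [])})
    if not needed:
        return None
    return ["apt-get", "install", "-y"] + needed

print(apt_command(["Vole"]))
# → ['apt-get', 'install', '-y', 'cargo', 'rustc']
```

The same data could feed either an on-the-fly `apt-get` call in the test job or the scripts that generate the Docker images.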
Some undeposited packages require other packages that are also undeposited. These dependencies therefore do not exist in the Docker containers, and so they must also be installed as necessary. This is not currently done, so the tests for such packages fail.
An example of this is BacktrackKit, which requires QuickCheck (and GraphBacktracking, which requires BacktrackKit...; and Vole, which requires GraphBacktracking...).
In addition, some packages have other dependencies that are not fulfilled in the current setup. The only example I have is Vole, which requires Rust in order to compile the package.
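If the Docker images were extended instead of installing on the fly, satisfying the Rust requirement could be a single step in the image build, along these lines. This is only a sketch: the Ubuntu package names (`rustc`, `cargo`) are assumptions, and a package needing a newer toolchain than Ubuntu ships might require rustup instead.

```dockerfile
# Illustrative only: install a Rust toolchain so that Vole can be compiled.
RUN apt-get update \
 && apt-get install -y --no-install-recommends rustc cargo \
 && rm -rf /var/lib/apt/lists/*
```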
How should we deal with these problems?