osmosisfoundation / pmx-docker

Run nonmem, psn, piranajs, and rstudio stacks as Docker containers
MIT License

PsN Tests Are Not Run #4

Open billdenney opened 7 years ago

billdenney commented 7 years ago

In the current version, PsN tests are not run to verify the installation is successful. This is related to #3 because the NONMEM license is not yet installed, so the tests cannot run.

Generally, I may update my PsN installation more often (approximately annually) than my NONMEM installation, so I may need to update the license file in the PsN installation separately from the NONMEM Docker image. I'd propose a fix similar to the one for the license issue in #3, plus scripting like the following to run the tests:

  && if [ -n "$NONMEM_LICENSE_STRING" ]; then echo "$NONMEM_LICENSE_STRING" > /opt/nm/license/nonmem.lic; fi \
  && cd /usr/local/share/perl/*/PsN_test* \
  && prove -r unit \
  && prove -r system \
  && rm -r /usr/local/share/perl/*/PsN_test* \

The suggested code additionally removes the test directory after completion because it should no longer be required.

BretFisher commented 7 years ago

Once Docker Cloud supports multi-stage builds, we can safely have the license file used in an early stage via ARG, and it'll be left out of the end stage! This is what I do with SSH keys now, and it's much better than the alternatives we had before this feature. It was released in 17.05 (edge) but likely won't be in Cloud/Hub for a few weeks, until the 17.06 (stable) release.

billdenney commented 7 years ago

That does sound helpful, but I am currently running Ubuntu 16.04 LTS on my systems, and bigger pharma companies are often farther behind. In my experience, requiring the latest and greatest of everything will often slow adoption within pharma. (I get bug reports against PKNCA due to a 2-year-old version of dplyr.)

While it's less elegant, I'd prefer to keep the current (less maintainable) version with tests as an option.

billdenney commented 7 years ago

... and I just checked: I'm actually running Ubuntu 16.04.2 LTS, which includes Docker 17.05, so it'll work for me. I also just read the link on multi-stage builds, and they seem very helpful for situations like this.

When using a multi-stage build, do they fail gracefully when running on older versions of Docker?

BretFisher commented 7 years ago

They fail, but if I remember correctly, not gracefully; it more likely just tells you there's an invalid command in the Dockerfile (multiple FROM lines). We can add a note about the error to the docs' FAQ.

We should likely set a Docker version support matrix in the docs as well. As someone who trains people on Docker, I see them install it wrong all the time and end up with old apt or yum versions rather than using the officially supported install methods on store.docker.com. I totally understand the old-versions scenario, but it'll be a moving target if we don't say something like "supports docker 17.06 and docker-compose 1.13".

I should also point out that, for this specific issue, there's no problem if people use the Hub images; it's only the local docker build of custom images that would fail. My approach has been that the default docker-compose.yml in the repo uses Hub images, so people don't build their own until they know what they are doing (another design goal of mine: "batteries included, but removable").
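As an illustration of that approach (the image and service names below are placeholders, not the project's actual tags), the default compose file would point at a prebuilt Hub image, and building a custom image locally would be an explicit opt-in:

  psn:
    # Default: pull the prebuilt, already-tested image from Docker Hub.
    image: osmosisfoundation/psn
    # Opt-in: uncomment to build a custom image locally instead; this is the
    # step that would require a newer Docker once multi-stage builds are used.
    # build: ./psn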

billdenney commented 7 years ago

Continuing discussion as mentioned at the end of #6.

Philosophically, I prefer images to exist only if they are tested, so that they can be used for validated applications with less documentation of the validation required. If they only exist when tested successfully, then existence is proof of testing; otherwise, documentation of testing is required. How difficult would it be to convert the current versions to multi-stage builds so that we can maintain the "existence proves testing" mantra?

dpastoor commented 7 years ago

It should be "simple".

Basically, the multi-stage build allows you to essentially squash the prior layers before continuing on, so you can imagine:

build stage 1: install everything, drop in the license, and run the tests

build stage 2: start clean and carry forward only the tested installation, without the license

This way stage 2 doesn't run unless stage 1 passes, but the image people receive will have the license file removed. I think I could sell Bob on being OK with having the license as an env variable on the CI server.

@BretFisher feel free to correct me; my exact build stages might be a little off. I have only used multi-stage builds for Go apps, where I just copy the binary from the build step to keep the final image super tiny.
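A rough sketch of those two stages (the base image, paths, and stage name are assumptions; the test commands are the ones from the first comment):

  # Stage 1: install with the license in place (from a build ARG) and run the PsN tests.
  FROM ubuntu:16.04 AS tested
  # ... NONMEM and PsN installation steps would go here ...
  ARG NONMEM_LICENSE_STRING
  RUN echo "$NONMEM_LICENSE_STRING" > /opt/nm/license/nonmem.lic \
      && cd /usr/local/share/perl/*/PsN_test* \
      && prove -r unit \
      && prove -r system \
      && rm -r /usr/local/share/perl/*/PsN_test* \
      && rm /opt/nm/license/nonmem.lic

  # Stage 2: only built if stage 1 (and therefore the tests) succeeded.
  # The license file was removed above, so it is never copied into this image.
  FROM ubuntu:16.04
  COPY --from=tested /opt/nm /opt/nm
  COPY --from=tested /usr/local /usr/local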

BretFisher commented 7 years ago

The more I think about this, the more I'm downvoting the idea of multi-stage builds. I don't think it's the right fit for this need.

With each build stage, you have to copy what you want to keep from one stage to the next. Usually that's just source code... but in this case, with all the dependencies, it would be a mess.

But now that I'm thinking about the solution, what I believe you're asking to do is test that an app was installed correctly.

I think we can actually do that after the image is built in Docker Cloud, but before it's pushed to the Docker Hub registry. This would solve the license problem, since the license would never be placed in an image, and it would meet your goal of not publishing the image unless those tests run with an exit 0.

How does that sound? This should work with minimal effort and a single file in ./psn/docker-compose.test.yml that Docker Cloud will run on each commit.
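As a sketch of what that file might contain (the sut service is what Docker Cloud's automated tests run, and the test only passes if it exits 0; the license handling and paths below are assumptions, with the license string expected as an environment variable on the build machine rather than in any image layer):

  sut:
    build: .
    # Assumption: the license string is set as an environment variable on the
    # CI/build machine and only exists for the duration of this test run.
    environment:
      - NONMEM_LICENSE_STRING
    command: sh -c 'echo "$$NONMEM_LICENSE_STRING" > /opt/nm/license/nonmem.lic && cd /usr/local/share/perl/*/PsN_test* && prove -r unit && prove -r system'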

billdenney commented 7 years ago

You've got 99% of what I'm asking for. I'll give a bit more explanation which will hopefully clarify the need.

I'm a small company (one person), so less time spent on software validation is more time spent on other components. Currently, my software environment is based on the assumption that "if the Docker image exists, it's automatically validated." The proposed solution would indicate "if the Docker image is on Docker Hub, the Docker Cloud tests have succeeded, or it would not be there." But I don't know of a way to externally verify that the tests ran. (Please tell me if I'm missing something--this could easily be my ignorance of Docker Cloud.) If I can't externally verify that the tests ran, I will have to re-run them myself.

Two options that I see are:

  1. Can we have the tests optionally built in to the original build (not what is used in Docker Cloud)? This would look something like what I suggested at the top of this thread.
  2. Is it possible to share the testing logs? If so, I could download the image from Docker Hub, download the test logs from Docker Cloud, match the image IDs, and confirm that the testing was completed.