joholl opened this issue 3 years ago
I've spun up machines like this for testing the SELinux kernel in Travis. However, last I knew, GitHub CI runners didn't have nested virtualization turned on.
@joholl You only enabled the unit tests. Was it possible to start tpm_server and run the integration tests?
@JuergenReppSIT When I did this, I only needed the unit tests. Although I (unnecessarily) built the mssim simulator (tpm_server), I did not run the integration tests. For that, I would have had to install more dependencies.
With issue #2649, I revisited this.
run-on-arch-action
I tried uraimo/run-on-arch-action, which is qemu-based (though I don't know exactly how). Currently I am struggling with the slowness (granted, based on a vanilla ubuntu-22.04).
ubuntu-22.04 on s390x:
The job running on runner GitHub Actions 3 has exceeded the maximum execution time of 360 minutes.
Another alternative would be multiarch/qemu-user-static, which seems even more promising. Basically, via docker run --privileged, it registers the qemu-user-static binaries as binfmt_misc interpreters on the host system. We could then directly run our foreign-arch docker images (assuming we switch to multi-arch builds):
ARG ARCH=
# e.g. pass --build-arg ARCH=s390x/ to pull s390x/debian:buster-slim
FROM ${ARCH}debian:buster-slim
# rest of Dockerfile
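For illustration, a minimal sketch of how this could fit together (the image tag tss-s390x is made up, and I haven't verified these exact commands):

# one-time on the (x86_64) host: register qemu-user-static binfmt handlers
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# build the image for a foreign architecture via the ARCH build argument
docker build --build-arg ARCH=s390x/ -t tss-s390x .
# binaries inside the container are now transparently emulated
docker run --rm tss-s390x uname -m   # prints: s390x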
I could run all integration tests on s390x/ubuntu:jammy with the following branch: https://github.com/JuergenReppSIT/tpm2-tss/tree/test-use-libtpms-on-big-endian. I will create a pull request once main-fapi.c has been adapted to the latest rework of the tests. The docker image used for the tests was created with:
docker pull s390x/ubuntu
sudo apt-get install -y qemu-system-s390x
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
docker build -t s390x_ubuntu_jammy_update -f s390x-ubuntu-22.04.docker .
The Dockerfile for the update was created from: https://github.com/JuergenReppSIT/tpm2-software-container/blob/s390x-ubuntu-22.04/s390x-ubuntu-22.04.docker.m4
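For completeness, running the integration tests inside that image could look roughly like this (a sketch; the mount path and configure flags are assumptions, the real setup is in the branch linked above):

docker run --rm -v "$PWD":/src -w /src s390x_ubuntu_jammy_update \
    bash -c './bootstrap && ./configure --enable-integration && make check'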
Cool!
I'm surprised integration_tcti in your configure.ac is not overwritten by the tcti autodetect further down:
In any case, I wanted to have something like this for the configuration step anyway. We might also want to replace --with-device=/dev/tpm0 with --with-integrationtcti=device:/dev/tpm0 later.
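To make the idea concrete: --with-integrationtcti does not exist yet, but the proposed option could accept an arbitrary TCTI config string, e.g.:

# hypothetical, proposed configure option (not implemented yet)
./configure --with-integrationtcti=device:/dev/tpm0
./configure --with-integrationtcti=mssim:host=localhost,port=2321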
What is missing now is the integration into our CI. I've played around with this as well. In my case, I tried to separate distro and arch a little bit into a (half-baked) matrix build; see the sketch after the link below. In any case, we need something like this in tpm2-software/ci.
https://github.com/joholl/ci/blob/5a1b3cb02a129865fbf3b0238589a55094d3a0bf/scripts/ci.sh#L43-L60
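A minimal sketch of what I mean by a distro/arch matrix, assuming qemu-user-static is already registered as above (image names and the script path are illustrative, not what the linked ci.sh actually does):

for image in ubuntu:22.04 debian:bullseye; do
    for arch in s390x arm64v8; do
        # e.g. s390x/ubuntu:22.04, runs under qemu-user-static emulation
        docker run --rm -v "$PWD":/src -w /src \
            "${arch}/${image}" /src/scripts/ci.sh
    done
done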
I could not get it to work with tpm_server/swtpm, but here are my efforts so far:
https://github.com/joholl/tpm2-tss/commits/ci_arch
https://github.com/joholl/ci/commits/main
https://github.com/joholl/tpm2-software-container/commits/master
This resulted in a failed run (some socket stuff was not working). But hey, it was in GitHub Actions.
I don't think we explicitly support big endian platforms (correct me if I'm wrong). As it turns out, the TSS not only builds and runs on big endian machines; apparently, we are packaged for such platforms as well. At least for me, that was a surprise.
However, we don't have a big endian platform in CI, which makes it hard to reproduce and fix issues (see #2125). When fixing tcti-pcap, I had to set up a qemu machine with an exotic processor architecture from scratch.
Therefore my RFC: what do you think about spinning up a big endian platform in the CI pipeline? Not only would we prevent future regressions, but we'd also have a "documented" (at least in code :D) way to test the TSS on big endian platforms. (Whether we explicitly support big endian platforms is a separate question, though.)
As for the technical part: since I've already spent some time setting up an s390x qemu machine running Alpine (at least to the point where the unit tests run), I think we're halfway there. For reference, see my notes below:
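(My actual notes are not reproduced here; as a rough sketch, booting such a machine looks something like the following, assuming kernel and initramfs from the Alpine s390x netboot tarball; file names and sizes are illustrative.)

# rough sketch: boot an s390x guest with kernel/initrd from the Alpine
# s390x netboot tarball (file names illustrative)
qemu-system-s390x \
    -machine s390-ccw-virtio -m 2048 -smp 2 -nographic \
    -kernel vmlinuz-lts -initrd initramfs-lts \
    -append "console=ttysclp0" \
    -drive file=alpine-s390x.qcow2,if=virtio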