testcontainers / testcontainers-java

Testcontainers is a Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container.
https://testcontainers.org
MIT License

Kubernetes Support Revisited #1135

Closed · deas closed this issue 2 years ago

deas commented 5 years ago

Testcontainers depends on Docker, and raw Docker is an issue in a Kubernetes-managed environment (e.g. Jenkins X). It ends up either using the /var/run/docker.sock escape hatch or dind, and both approaches have issues. Would you consider adding native Kubernetes support? Native Kubernetes support would even make Docker optional in a Kubernetes environment.

See also: https://github.com/testcontainers/testcontainers-java/issues/449

jstrachan commented 5 years ago

It's easy to create containers as Pods in Kubernetes using a Java client for Kubernetes, like this one: https://github.com/fabric8io/kubernetes-client
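
For illustration, a minimal sketch with the fabric8 client (the pod name and image here are arbitrary, and the exact builder entry points vary between client versions):

    import io.fabric8.kubernetes.api.model.Pod;
    import io.fabric8.kubernetes.api.model.PodBuilder;
    import io.fabric8.kubernetes.client.KubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClientBuilder;

    public class PodExample {
        public static void main(String[] args) {
            // Connects via the local kubeconfig or the in-cluster service account
            try (KubernetesClient client = new KubernetesClientBuilder().build()) {
                Pod pod = new PodBuilder()
                        .withNewMetadata().withName("redis-test").endMetadata()
                        .withNewSpec()
                            .addNewContainer()
                                .withName("redis")
                                .withImage("redis:6-alpine")
                                .addNewPort().withContainerPort(6379).endPort()
                            .endContainer()
                        .endSpec()
                        .build();
                client.pods().inNamespace("default").resource(pod).create();
            }
        }
    }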

bsideup commented 5 years ago

Hi @deas & @jstrachan,

First of all, I'm sure @rnorth and @kiview have something to add and might have a different opinion. Please don't read my answer as the firm position of the whole team; it is just my perspective on it :)


We, of course, want to support as many platforms as possible! But we also need to keep focus and understand the problem we're solving.

Native k8s support is something we hear quite often. But does it actually solve the problem? Or is it the problem?

Just a theoretical example: if we focus on integrating Testcontainers with runc (without the Docker daemon) and rootless containers, or Kata Containers, it will work locally, on CI environments, and on k8s, because by doing that we will solve the problem of the Docker daemon requirement.

But if we add k8s support (worth mentioning that k8s is not a container engine like Docker, but an orchestrator), we will have to support two completely different ways of spinning up containers in one code base. If somebody wants to volunteer to contribute and maintain it, that could be an option, but so far nobody has, which means that our small team would have to develop and support both "ways".

So, do we solve a problem by supporting k8s APIs or add another one?

@jstrachan

it’s easy to create containers as Pods in kubernetes using a java client for kubernetes

That's true. But starting containers is a very small part of Testcontainers.
We could adapt the DSL to some extent (although we do use some Docker-specific APIs in a few places), but the number of limitations is huge: networks can't be done the Docker way, file mounting will not work as expected (or at all), and there are many more tiny details hidden behind Testcontainers, accumulated over years of development.
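
For context, a rough sketch of a few of the Docker-specific features mentioned above, as they look in the current Java API (the image, network alias and file paths are placeholders):

    import org.testcontainers.containers.GenericContainer;
    import org.testcontainers.containers.Network;
    import org.testcontainers.utility.MountableFile;

    public class DockerSpecificFeatures {
        public static void main(String[] args) throws Exception {
            // Docker networks with aliases have no direct Kubernetes equivalent
            try (Network network = Network.newNetwork();
                 GenericContainer<?> app = new GenericContainer<>("alpine:3.17")
                         .withNetwork(network)
                         .withNetworkAliases("app")
                         // File "mounting" by copying into the container before startup
                         .withCopyFileToContainer(
                                 MountableFile.forClasspathResource("config.yml"),
                                 "/app/config.yml")
                         .withCommand("sleep", "300")) {
                app.start();
                // Executing commands inside the running container
                app.execInContainer("ls", "/app");
            }
        }
    }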


To keep the conversation going, I suggest we first define the problem and find how we can quickly solve it :)

kiview commented 5 years ago

I agree with @bsideup regarding splitting the actual issue at hand:

  1. Supporting container engines beside Docker
  2. Supporting orchestrators

1 is definitely something I'd like to keep in mind going forward with Testcontainers (at least having an internal software architecture that allows for other engines). As far as I understand, this would solve the issues with Jenkins X, wouldn't it?

2 is something I could see as a different module (like we already have now with docker-compose support), but probably not something that will be built into Testcontainers core, like the container engine abstraction.

deas commented 5 years ago

Ok, so here is my story.

Testcontainers was saving my ass wrt integration testing, which requires a bunch of containers for me (on the dev box). Unfortunately, Kubernetes introduced new challenges. I've only had a quick glimpse, and I know pretty much nothing about Testcontainers other than that it uses docker-compose under the covers.

At first I thought it should pretty much boil down to swapping docker-compose calls with Kubernetes equivalents. In fact, they already made docker-compose play with Kubernetes: https://github.com/docker/compose-on-kubernetes . Helm may be another alternative to spin things up.

Other than spinning things up, I guess you do various Docker calls to check the state of containers and do other things, right? Hence, it appears there is other functionality that would need to be implemented if we are aiming at parity with docker-compose.

bsideup commented 5 years ago

Testcontainers does not use Docker Compose at all :) We have a module for it, but the core and every other container are implemented with the Docker API and only it :)
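
For readers who are new to the library, core usage looks roughly like this, and everything shown here talks to the Docker API directly (the image and port are arbitrary):

    import org.testcontainers.containers.GenericContainer;

    public class RedisExample {
        public static void main(String[] args) {
            try (GenericContainer<?> redis = new GenericContainer<>("redis:6-alpine")
                    .withExposedPorts(6379)) {
                redis.start();
                // Host and mapped port are resolved through the Docker API at runtime
                System.out.println("Redis at " + redis.getHost() + ":" + redis.getMappedPort(6379));
            }
        }
    }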

kiview commented 5 years ago

@deas I kind of like your narrative here:

At first I thought it should pretty much boil down to swapping docker-compose calls with Kubernetes equivalents. In fact, they already made docker-compose play with Kubernetes: https://github.com/docker/compose-on-kubernetes . Helm may be another alternative to spin things up.

This is exactly something I've been wanting to look into for some time now (but sadly haven't had the time yet): a common abstraction between Docker Compose, Docker Swarm Mode and Kubernetes, relying on the existing abstraction built into recent Docker Compose versions.

As @bsideup has mentioned, the current Docker Compose support is already implemented as its own module and we might be able to come up with something similar for the other orchestrators.

However, Docker Compose support already doesn't have real feature parity compared to using Testcontainers directly, since we are adding an additional layer of indirection, so I would expect the same for other orchestrator implementations.

deas commented 5 years ago

Don't want to open a can of worms, but is it actually still reasonable to work on the container/pod/microvm level in Testcontainers?

kiview commented 5 years ago

I don't really get this question, could you clarify a bit more?

People are starting to use Testcontainers for all kinds of use cases, and we are really happy to see this. We also try to support as many of them as reasonably possible.

However, we also have to look at the history and main use case of Testcontainers, which is IMO (the opinions of the other team members might differ) white-box integration testing. This is where Testcontainers is strongest, where I see the most development happening, and it is also the most common way people start to use Testcontainers.

Then there is an increasing number of users who use Testcontainers for black-box integration testing as well as for system testing or acceptance testing. This is also great and definitely a desired use case. However, those tests and systems should stay small enough that they are still runnable on a local developer machine. That's how we mainly think about Testcontainers: a tool to give developers fast feedback while developing, with the added benefit of having the same tests executed in your CI environment without any additional setup or environment duplication (and we do our best here to support as many CI systems as possible).

I'm not sure if this answers your question, but it explains why I generally struggle to see the need for actual Kubernetes integration. Still we are generally open to ideas and would greatly appreciate contributions from the Kubernetes community in order to tackle those topics.

Regarding your other question:

Other than spinning things up, I guess you do various Docker calls to check the state of containers and do other things, right? Hence, it appears there is other functionality that would need to be implemented if we are aiming at parity with docker-compose.

We use the Docker API for multiple different features, like file mounting/copying, executing commands in containers, networking, etc. Another big part of the Testcontainers UX is WaitStrategies (blocking test execution until the application in the container is ready, not just until the container is running), but since they can also work in a black-box way, this concept could probably be adapted for orchestrators.
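
For illustration, a minimal sketch of such a wait strategy in the Java API (the image, path and timeout are arbitrary). Because it probes the application over the network, it is exactly the kind of black-box check that could conceivably be ported to an orchestrator:

    import java.time.Duration;

    import org.testcontainers.containers.GenericContainer;
    import org.testcontainers.containers.wait.strategy.Wait;

    public class WaitStrategyExample {
        public static void main(String[] args) {
            try (GenericContainer<?> nginx = new GenericContainer<>("nginx:alpine")
                    .withExposedPorts(80)
                    // Block until the application answers, not just until the container runs
                    .waitingFor(Wait.forHttp("/")
                            .forStatusCode(200)
                            .withStartupTimeout(Duration.ofSeconds(60)))) {
                nginx.start();
            }
        }
    }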

deas commented 5 years ago

Again, I've just been jumping into Testcontainers, so you can be sure I am missing a bit.

My use case was: run Maven integration tests against a composite of services. I was dropping in a docker-compose.yaml and everything was fine on my dev box. I use that very same file to spin up the composite and work on the system locally. I was very surprised how easily my problems were solved so far.

Hence, I was wondering why there is a need to deal with the parts (containers/pods/microvms) of that composite. And in fact, WaitStrategies and things along those lines fall into that category. I'm still not sure whether some of that functionality could also be covered by a tool at the composite level (e.g. docker-compose or helm).
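
For reference, the compose module being discussed here is driven like this (a rough sketch; the compose file and service name are whatever your project defines):

    import java.io.File;

    import org.testcontainers.containers.DockerComposeContainer;
    import org.testcontainers.containers.wait.strategy.Wait;

    public class ComposeExample {
        public static void main(String[] args) {
            try (DockerComposeContainer<?> env =
                         new DockerComposeContainer<>(new File("docker-compose.yml"))
                                 // Expose one service's port and wait until it is listening
                                 .withExposedService("redis_1", 6379, Wait.forListeningPort())) {
                env.start();
                System.out.println("redis at " + env.getServiceHost("redis_1", 6379)
                        + ":" + env.getServicePort("redis_1", 6379));
            }
        }
    }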

postulka commented 5 years ago

We would also like to see a module for the Kubernetes orchestrator, similar to the docker-compose module... maybe support for Helm? Our use case is that we want to run some integration/automation tests, and for that we need to spin up quite a few different services, some of which can be quite heavy (not all the processes are microservices). Depending on the test, the environment we need to spin up can get quite complex, and therefore it is not very feasible to run it on a single machine. Kubernetes support would resolve this for us, because Kubernetes would scale the cluster up and down and distribute the resources as needed.

tonicsoft commented 5 years ago

For our use case (GitLab CI builds on AWS using Kubernetes runners with Docker installed), I believe it should be enough to add the --network=host CLI parameter to all docker calls (or the equivalent, if the CLI is not used by Testcontainers). Is this something that is currently possible, or would it be less controversial to add? Perhaps it merits a separate feature request?
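
For what it's worth, host networking can already be requested per container, so something like this may cover that case today (a sketch, untested in the Kubernetes-runner setup described above):

    import org.testcontainers.containers.GenericContainer;

    public class HostNetworkExample {
        public static void main(String[] args) {
            try (GenericContainer<?> container = new GenericContainer<>("alpine:3.17")
                    // Equivalent of `docker run --network=host`; note that port mapping
                    // is bypassed in this mode, services bind directly on the host
                    .withNetworkMode("host")
                    .withCommand("sleep", "300")) {
                container.start();
            }
        }
    }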

jzabroski commented 4 years ago

Just a theoretical example: if we focus on integrating Testcontainers with [runc](https://github.com/opencontainers/runc) (without the Docker daemon) and rootless containers, or Kata Containers, it will work locally, on CI environments, and on k8s, because by doing that we will solve the problem of the Docker daemon requirement.

No, you've got it wrong. Don't focus on integrating anything just yet. Focus on designing an API that people can use to plug in whatever integrations they want. Lay out a roadmap so that people understand what needs to be done; then they will do it. The average person probably wants to help, but doesn't even know where to look. Adding a "help-wanted" tag does nothing to facilitate, mentor, and advocate for that average person to come help you. What is really needed is not help, but guidance/resources to get started. Otherwise, you might as well tag this "serenity-now" for those of us with brittle integration tests searching for solutions.

And this sort of roadmap should be done at a level higher than testcontainers-java or testcontainers-go. It's literally the blueprint for the whole project.
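
To make that concrete, a purely hypothetical sketch of such a plug-in seam; none of these types exist in Testcontainers today, and the names are invented for illustration only:

    import java.util.List;
    import java.util.Map;

    // Hypothetical SPI: an engine-neutral seam that Docker, Kubernetes, runc, ...
    // backends could each implement and register, e.g. via ServiceLoader.
    public interface ContainerBackend {

        /** What a test asks for, independent of any particular engine. */
        record ContainerSpec(String image, List<Integer> exposedPorts, Map<String, String> env) {}

        /** An engine-specific handle to a started container. */
        interface RunningContainer extends AutoCloseable {
            String host();
            int mappedPort(int originalPort);
        }

        /** Start a container described by the spec and block until it is ready. */
        RunningContainer start(ContainerSpec spec);
    }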

imochurad commented 4 years ago

Our company would like to use TestContainers, but lack of Kubernetes support is an issue.

bountin commented 4 years ago

I'd be interested in the expectations around "Kubernetes support": is it just about scheduling a pod on a k8s cluster, similar to what Testcontainers does now with running a Docker container? Or is it more about deployment and removal of arbitrary k8s objects (e.g. secrets) to, for instance, test k8s integration?

bert-laverman commented 4 years ago

Kubernetes support will be very important once k8s 1.18 becomes mainstream, because it will by default no longer support Docker access to the underlying container platform; instead, it will go directly to containerd. So any build platform in k8s will not be able to easily run Maven jobs depending on Testcontainers.

stormobile commented 4 years ago

The problem with Testcontainers and K8s is that TCs give developers an easy way to couple deployment entities together when there is no actual need or requirement to do so. It puts quite a lot of strain on infrastructure/ops teams, as TC adoption forces them to deploy larger VMs/workers that cost more and are harder to utilize efficiently. For example, some of our teams run a test scenario where they need:

  1. A service written in Java
  2. Kafka
  3. Oracle

They are hooked on Testcontainers, which spawns everything inside the same root container (or Pod, in the case of K8s), thus making scheduling harder and driving vertical node-size scaling (instead of horizontal infrastructure scaling). This becomes even more of a problem with managed cloud runners like the ones provided by GitHub/GitLab, as they are small (c4r8 tops).

It would be great if Testcontainers adopted the benefits of modern orchestration (mainly K8s) by providing an abstraction layer that preserves the simplicity of configuration/test delivery for developers, but with the ability to decouple deployment entities. K8s already features service discovery and isolation (including the cool new hierarchy of namespaces from 1.19) that should make this possible.

ralph089 commented 4 years ago

I have found an interesting project that addresses this problem. Has anyone tried this out yet? https://github.com/JeanBaptisteWATENBERG/junit5-kubernetes

Nevertheless, better integration of Kubernetes in the Testcontainers project itself would be preferable, because a developer could then work locally via Docker and in the CI environment via Kubernetes without having to use different libraries/APIs.

rnorth commented 4 years ago

Allowing Testcontainers to work atop a Kubernetes or Docker backend has long been something we would quite like to be able to do.

The question is time. We have a huge amount of other work that we need to do, and Kubernetes support is only a benefit to a subset of users. @bsideup, @kiview and I work on this almost entirely in our personal time, which is limited.

The extent of changes would be so big that I don't think this is something we can throw open to the community to work on either. Thinking about the PR review volume, plus setting up/owning test infrastructure, and then support, it would still place a very heavy burden on us as the core team.

Realistically I think the only practical way forward would be if a company, or group of companies, would sponsor development of this feature. If it's a feature worth having, then I'd hope that this would be a reasonable proposal, and it's one we could explore further. Otherwise, I'm afraid it's likely going to remain as one of those things that we'd like to do, but don't have the capacity for.

jzabroski commented 4 years ago

For example, some of our teams run a test scenario where they need:

  1. A service written in Java
  2. Kafka
  3. Oracle

This becomes even more of a problem with managed cloud runners like the ones provided by GitHub/GitLab, as they are small (c4r8 tops).

So... in your example, if you're using GitLab or GitHub, how do you spin up a CI build that launches Oracle, Kafka and your java service? I didn't understand how Kubernetes support solves your problem.

stormobile commented 4 years ago

So... in your example, if you're using GitLab or GitHub, how do you spin up a CI build that launches Oracle, Kafka and your java service? I didn't understand how Kubernetes support solves your problem.

We have to run big workers in CI, but Testcontainers could provide the abstraction layer to specify how to spawn services in a target K8s cluster (much like how it is done with compose). All of this could be done in CI itself with dynamic environment preparation (with yaml, helm and all the other possible K8s deployment options), but the point of the Testcontainers approach is that all the same stuff is done in code, which is the main value of the project (not the ability to spawn containers itself).

jzabroski commented 4 years ago

Is the k8s cluster a virtual setup within the CI build server, or are you deploying the CI build to a k8s cluster with physical nodes? Based on your answer, you didn't answer my question. It matters in terms of who the customers are for your feature request. If the issue is truly that you can't properly integrate everything in the same CI build, then that's "Enterprise solution" territory and not an open source project.

That said, you can look at C# and .net: Micronetes and TestEnvironment.Docker

These are a bit closer to the tech stack you want.

Asgoret commented 4 years ago

Hi all! This may be a little out of scope, so I'm sorry for that ;D My developers came to me with a problem: TC can't connect to docker.sock due to security restrictions. We have vanilla Jenkins as a CI/CD tool and use DinD on bare-metal slaves; we are also moving to generated slaves in an OKD/K8S cluster. Is there any way to use TC in a secure mode, i.e. without giving high privileges to TC containers?

cc @rnorth @jstrachan @bsideup

rnorth commented 4 years ago

@Asgoret it's not an out of scope question - that's the topic of this issue 😄 . Basically, Testcontainers requires a docker daemon in order to be able to launch containers. We don't have a way to launch containers unless you can provide some kind of docker daemon.

Asgoret commented 4 years ago

@rnorth Well... I've read the thread with one eye and can suggest a couple of options: 1) through the Jenkins DSL (I think other CI/CD tools provide similar features); 2) through integration plugins (e.g. the Jenkins Kubernetes plugin) to get sidecar containers, for example (this can also be done not via the DSL but through Jenkins configuration; I can't find examples, but I did it in my own Jenkins ;D).

So the main idea is that TC would work in two modes: 1) admin mode, where TC works with docker.sock by itself; 2) user mode, where TC just gets the names of sidecar containers, doesn't create container networks, and waits for the sidecar containers to come up.

What do you think about it?

Asgoret commented 4 years ago

@rnorth @bsideup is there any chance that TC will support CRI (Container Runtime Interface) and be able to use OCI (Open Container Initiative) compatible runtimes (e.g. rkt, cri-o, containerd, Kata Containers, etc.)?

tomikmar commented 3 years ago

Until this is implemented (if ever), do we have any alternatives to run Testcontainers without DinD (--privileged mode) or DooD (/var/run/docker.sock)?

kamkie commented 3 years ago

one alternative is podman https://github.com/testcontainers/testcontainers-java/issues/2088

it is getting better

bsideup commented 3 years ago

Or Rootless Docker, which you can use already today, does not require learning a new tool, and is 100% API-compatible with the regular one 😉 https://bsideup.github.io/posts/rootless_docker/

rnorth commented 3 years ago

@Asgoret sorry for the slow response, and thanks for those suggestions, but I really think paras 2-4 of my earlier comment would apply!

maitredede commented 3 years ago

Hi,

I would like to present some use cases I would be happy to see.

One case is full application integration tests. For example, I have an app with 3 instances each of ZooKeeper, Kafka, etcd, Elasticsearch... and I would like to run full testing campaigns against the same replication configuration. The only problem is that all these replicas do not fit on one machine, so Kubernetes seems like a good orchestration tool.

The other case is multi-arch testing. With Kubernetes, you can set node affinity on the architecture, so testing some parts of my full app on another arch could be useful. Maybe I can already achieve this by directly targeting the Docker server on the required machine...

On the other side, if you need to test something against Kubernetes but you only have Docker, you can try to use kind (Kubernetes in Docker)...

bert-laverman commented 3 years ago

Has anyone weighed in on Kubernetes v1.19 and the step away from Dockershim? This has nothing to do with supporting Kubernetes alongside pure Docker, but everything to do with running a build in Kubernetes and expecting to get a Docker API so Testcontainers works. I know (from remarks, not from my own experience) there are alternatives, but that just means I need to build custom container images with additional content, instead of using e.g. the standard Maven ones.

AndreiYu commented 3 years ago

Found a great solution: use a DinD (Docker-in-Docker) sidecar container in your Pod and specify DOCKER_HOST in your main container. I tried it and it works in Kubernetes.

https://timmhirsens.de/posts/2019/07/testcontainers_on_jenkins_with_kubernetes/

sturman commented 3 years ago

Here is the updated Jenkinsfile example https://gist.github.com/sturman/dd1ee54c34bd4a0eee06c25cda23d709

andyholst commented 3 years ago

You can use a custom-made starter Docker container image that the Testcontainers library starts. The starter container creates a Kubernetes cluster in Docker with the help of the lightweight Kubernetes distribution tool k3d. The starter container verifies that the Kubernetes cluster is up and running via a custom-made health check implementation; the health check logic verifies that all of the deployed pods, with their related services and ingress controller, are up and running before the JUnit tests start. After the tests have been executed, the CI pipeline stops the created Kubernetes containers. Instead of executing the docker kill command in a CI pipeline, the command could instead be executed inside a test suite class after all of the integration tests have run.

Look at the https://github.com/jsquad-consulting/openbank-spring/pull/21 for further details.
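
Related: more recent Testcontainers releases ship a K3s module covering a similar need out of the box. A minimal sketch, assuming the org.testcontainers:k3s artifact is on the classpath:

    import org.testcontainers.k3s.K3sContainer;
    import org.testcontainers.utility.DockerImageName;

    public class K3sExample {
        public static void main(String[] args) {
            try (K3sContainer k3s = new K3sContainer(DockerImageName.parse("rancher/k3s:v1.21.3-k3s1"))) {
                k3s.start();
                // Kubeconfig for pointing any Kubernetes client at the embedded cluster
                System.out.println(k3s.getKubeConfigYaml());
            }
        }
    }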

liqweed commented 3 years ago

@andyholst That sounds similar to kindcontainer which we're using. It bootstraps Kind via testcontainers and provides easy access to a KubernetesClient instance configured to connect to the embedded Kubernetes. We use that to test Kubernetes controllers we're writing.

K3d sounds more lightweight, we'll give it a go.

andyholst commented 3 years ago

@andyholst That sounds similar to kindcontainer which we're using. It bootstraps Kind via testcontainers and provides easy access to a KubernetesClient instance configured to connect to the embedded Kubernetes. We use that to test Kubernetes controllers we're writing.

K3d sounds more lightweight, we'll give it a go.

@liqweed I refined the Kubernetes integration tests to use a common k3d container that has access to the kubectl and docker CLIs. Take a look for inspiration at https://github.com/jsquad-consulting/openbank-spring/commit/4ea5d92d7ddcd11f7ea0d1f79d151c126f9b5de7 and https://github.com/jsquad-consulting/k3dconrainer

okouyad commented 3 years ago

Quick question for clarification, as I'm starting to learn how to deal with this.

I want to use Testcontainers for integration tests, and our CI on GitHub uses the self-hosted runner facility, which we run in Kubernetes. Does that mean that the method presented at https://www.testcontainers.org/supported_docker_environment/continuous_integration/dind_patterns/ (the 'Docker wormhole' pattern, with sibling docker containers) won't work, and I need to go full dind?

sharkymcdongles commented 3 years ago

Found a great solution: use a DinD (Docker-in-Docker) sidecar container in your Pod and specify DOCKER_HOST in your main container. I tried it and it works in Kubernetes.

https://timmhirsens.de/posts/2019/07/testcontainers_on_jenkins_with_kubernetes/

This is not a long-term solution, as Docker is deprecated and all the big providers will move away from it in the next year or so. If this project is to have longevity, it needs to abstract away direct use of the daemon. dind is fine historically, but most people won't want to limp along with Docker nodes just for testing.

https://dev.to/inductor/wait-docker-is-deprecated-in-kubernetes-now-what-do-i-do-e4m

Also, dind and other Docker-based workloads suffer up to a 30% loss in performance due to the way Docker works through a shim instead of a native CRI. So in general, moving away from Docker would not just future-proof the project but also improve execution.

For the record, we currently use the dind approach with GitLab runners, and it works fine; however, we need to move away from Docker, hence my ending up here. We have already gotten rid of Docker for everything except the runner nodes, because we are still forced into dind for now.

bsideup commented 3 years ago

@sharkymcdongles

docker is deprecated

Docker is not deprecated. Kubernetes no longer uses docker-shim by default and talks to containerd directly, the same way as Docker daemon does.

Also, dind and other Docker-based workloads suffer up to a 30% loss in performance due to the way Docker works through a shim instead of a native CRI

this does not apply to dind, but to Kubernetes itself and how Kubernetes schedules containers (hence their change to use containerd directly).


Also, with the addition of rootless Docker, running a Docker daemon as a pod is super easy nowadays.

sharkymcdongles commented 3 years ago

I am aware; however, GKE for example will no longer support Docker-daemon-based VMs by default. And for those who want to use containerd or cri-o, you cannot do this with dind-based workloads, which is my main concern here. Sure, the shim is deprecated and will go away, but dind is still not going to work across the board with alternative CRIs, at least from what I have seen and researched. If you have any docs to counter this, I'd be happy to learn and change my view.


bsideup commented 3 years ago

@sharkymcdongles

GKE for example will no longer support Docker-daemon-based VMs by default

You don't need that in order to run Docker as a sidecar pod.

dind is still not going to work across the board with alternative CRIs, at least from what I have seen and researched

Consider trying rootless Docker.

orirawlings commented 3 years ago

Are there any examples of how to use Testcontainers where rootless Docker runs as a sidecar container in a Kubernetes pod? Basically, we're facing an issue where our Kubernetes hosts aren't even running Docker anymore, or don't otherwise allow mounting the host network and/or host volumes into the CI build pod. So the Docker socket (if the Docker daemon is even running on the host) is not accessible for the dind patterns mentioned here: https://www.testcontainers.org/supported_docker_environment/continuous_integration/dind_patterns/

kiview commented 3 years ago

@sharkymcdongles

If you have any docs to counter this, I'd be happy to learn and change my view.

Why should the maintainers be obliged to prove anything to you?

For the record, we currently use the dind approach with GitLab runners, and it works fine; however, we need to move away from Docker, hence my ending up here. We have already gotten rid of Docker for everything except the runner nodes, because we are still forced into dind for now.

Testcontainers is licensed under MIT, so consider adapting the software for your purposes if your company has strong business needs for it to support other modes of execution. There is always the possibility of contributing changes back upstream.

In addition, consider whether your business requirements are worth sponsoring Testcontainers over GitHub Sponsors in order to shape the focus of future features, or get in touch with us directly if you want to discuss details regarding paid development of business-critical features.

sharkymcdongles commented 3 years ago

I contribute to open source all the time and would happily help if this is the plan. Sorry if this came across as a demand. I was just saying this should be on the roadmap, but as I am an outsider to the project, it isn't my place to shape your direction as a project and team.


sharkymcdongles commented 3 years ago

Okay, just to share: I took the advice of @bsideup and was able to get this running with dind-rootless, with a bit of pain and hackery, via GitLab runners.

Perhaps this will be useful to @orirawlings.

This isn't possible via the gitlab-ci.yml approach, so this ticket should fix that. Basically this config works for me:

        [[runners.kubernetes.services]]
          name = "path to your dind-rootless image"
          alias = "dind-rootless"
          entrypoint = ["dockerd-entrypoint.sh"]
          command = ["dockerd-entrypoint.sh", "--experimental", "--storage-driver=overlay2", "--default-runtime=crun", "--add-runtime=crun=/usr/local/bin/crun", "--userland-proxy"]

A few things to note:

  1. containerd 1.3.x has a bug with OOM scoring, so you can only use a dind-rootless image built from 1.19.07 or before, which sadly means you will need a custom image. There is a ticket tracking a fix for this issue, so hopefully Docker will add the fix soon-ish and make the custom image unnecessary: https://github.com/containerd/containerd/issues/4837

  2. The 1.19.07 image has a bug in the dockerd-entrypoint.sh entrypoint script that needs to be patched. This line:

         "$@" --userland-proxy-path=rootlesskit-docker-proxy

     needs to be changed to:

         "$@" --userland-proxy-path=/usr/local/bin/rootlesskit-docker-proxy

  3. Building crun into the image and using it over runc makes it perform better on containerd hosts and is recommended. I did this by adding the following to my Dockerfile:

         RUN wget https://github.com/containers/crun/releases/download/0.17/crun-0.17-linux-amd64
         RUN mv crun-0.17-linux-amd64 /usr/local/bin/crun
         RUN chmod +x /usr/local/bin/crun

  4. Set the DOCKER_HOST variable to: DOCKER_HOST: tcp://dind-rootless:2375

  5. You can also do TLS if you want to get fancy, but that is well documented elsewhere. I chose not to use it, since TLS makes very little difference when the traffic stays within the local pod network.

  6. overlay2 will only work on Ubuntu or Debian at the moment. If not using Debian or Ubuntu, change overlay2 in the command declaration to vfs. This will have a large performance hit though, so I recommend using Debian or Ubuntu.

Anyway, it is a bit of an effort, but I think it is fine and prod-ready if you don't mind carrying your own dind-rootless images until the patches are finalized upstream. This project uses containers, so if it can run anywhere, then I guess upstream support isn't really needed given the approach above works. I assume a similar approach would work for other CI pipeline providers. Cheers!

bsideup commented 3 years ago

@sharkymcdongles wow, great writeup, thanks for sharing! 💯

Petikoch commented 3 years ago

Cloud-native builds will become more and more popular in the future IMHO (e.g. Tekton, Jenkins X, ...); this means the actual CI builds (e.g. Maven or Gradle) typically run somewhere in a Kubernetes pod.

Just to sum up the current state of "Kubernetes support": there are "hacks" to use Testcontainers in builds running in e.g. Tekton or Jenkins X, but there is no "clean" out-of-the-box support?

Did I get this right? Thank you for clarification.

bsideup commented 3 years ago

@Petikoch

No "hacks" required to run Testcontainers tests in Tekton or Jenkins X. Testcontainers requires an access to a Docker daemon instance, it is up to you how to provide the access and outside of the scope of the Testcontainers project. If your CI system (developed by some company, not FOSS community, btw) is not flexible enough to be able to provide a Docker daemon as a service (similar to how CI systems like GitHub Actions / Azure Pipelines / CircleCI / Travis and many others do), then I would recommend asking them to make it easier to have Docker in your CI pipelines first.

Also, with Rootless Docker, the focus shifted from "does my platform support Docker?" to "I can just configure Docker myself, as a user, no admin rights required". It is a new mode for Docker, but very promising, and Testcontainers works perfectly fine with Rootless Docker.

Last but not least, this ticket is about "make Testcontainers talk to Kubernetes as an engine", not "how to run Testcontainers-based tests in Kubernetes", a.k.a. "how to start Docker in Kubernetes", a.k.a. "there are a million people in the k8s community, but I will demand a resolution from a FOSS project, even on an unrelated question". There are reasons why adding Kubernetes as a second engine to Testcontainers makes little sense at this stage (see the previous comments on this issue). Also, consider asking your sysadmin (or any other person responsible for your Kubernetes installation) whether they would be okay giving access to the k8s API from inside the CI tasks, he-he :)

Petikoch commented 3 years ago

Thanks for the clarification @bsideup! ❤️

Maybe this is worth one or two sentences in the excellent https://www.testcontainers.org/ documentation?

"Testcontainers requires access to a Docker daemon instance. The daemon instance runs either typically on your local dev machine (e.g. by using the popular Docker Desktop) or on a remote machine ("remote Docker daemon" aka "Docker daemon as a service")".

?

bsideup commented 3 years ago

@Petikoch we already have the "System Requirements" section (that also talks about the various Docker options), and Docker is listed in "Prerequisites" on the frontpage :) Do you still think we need to add more clarification? And, if so - consider contributing 😊