In an attempt to workaround this, I am experimenting with this: https://github.com/joyrex2001/kubedock.
This is a limited Docker API implementation that creates deployments for created/started containers in a k8s namespace. Once a container is started, it opens port-forwards for all exposed ports. Volume mounts are supported by copying the data to an empty volume using an init container. Logs and exec have (limited) support as well. Together this seems to be enough for simple containers to work fine as native k8s pods.
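For anyone wiring this up, the consumer side is mostly environment configuration. Below is a rough, unverified fragment for the container that runs the tests; the default kubedock port (2475) and the disabled Ryuk/startup checks are assumptions taken from the kubedock README, not something confirmed in this thread:

```yaml
# Hypothetical fragment for the test-running container, assuming kubedock is
# reachable on localhost (e.g. as a sidecar) on its assumed default port.
env:
  - name: DOCKER_HOST
    value: tcp://localhost:2475        # kubedock's Docker-API endpoint (assumed default)
  - name: TESTCONTAINERS_RYUK_DISABLED
    value: "true"                      # kubedock cleans up the k8s resources it creates
  - name: TESTCONTAINERS_CHECKS_DISABLE
    value: "true"                      # the startup checks expect a full Docker daemon
```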
It seems there is an implementation similar to testcontainers for Kubernetes: https://github.com/JeanBaptisteWATENBERG/junit5-kubernetes
If you want to run Testcontainers with Tekton, then here you go!
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: task-mvn-test
  namespace: testcontainer-poc
spec:
  sidecars:
    - image: 'docker:20.10-dind'
      name: docker
      resources: {}
      securityContext:
        privileged: true
      volumeMounts:
        - mountPath: /var/lib/docker
          name: dind-storage
        - mountPath: /var/run/
          name: dind-socket
  steps:
    - args:
        - test
        - '-Dspring-boot.run.profiles=test'
      command:
        - /usr/bin/mvn
      image: gcr.io/cloud-builders/mvn
      name: step-mvn-test
      resources: {}
      volumeMounts:
        - mountPath: /var/run/
          name: dind-socket
      workingDir: /workspace/source
  volumes:
    - emptyDir: {}
      name: dind-storage
    - emptyDir: {}
      name: dind-socket
  workspaces:
    - name: source
```
@rasheedamir this is amazingly simple, thanks for sharing! 😍
Here is the updated Jenkinsfile example https://gist.github.com/sturman/dd1ee54c34bd4a0eee06c25cda23d709
Settings for the Maven container:
envVars: [ envVar(key: 'DOCKER_HOST', value: 'tcp://localhost:2375'), envVar(key: 'DOCKER_TLS_VERIFY', value: '0') ])
and settings for the docker-in-docker container:
envVars: [ envVar(key: 'DOCKER_TLS_CERTDIR', value: "") ])
You saved my day, thank you!
Hi,
Thanks a lot. The issues https://github.com/testcontainers/testcontainers-java/issues/700 and https://github.com/testcontainers/testcontainers-java/issues/1135 helped me to get Testcontainers running in Jenkins with dind-rootless inside Kubernetes.
In the hope that this might help someone else, here is a stripped-down example of a Jenkins build job pod definition:
```yaml
apiVersion: v1
kind: Pod
spec:
  securityContext:
    fsGroup: 1000
    runAsGroup: 1000
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: dind
      image: docker:20.10-dind-rootless
      imagePullPolicy: Always
      env:
        - name: DOCKER_TLS_CERTDIR
          value: ""
      securityContext:
        # still needed: https://docs.docker.com/engine/security/rootless/#rootless-docker-in-docker
        privileged: true
        readOnlyRootFilesystem: false
    - name: gradle
      image: openjdk:11
      imagePullPolicy: Always
      env:
        - name: DOCKER_HOST
          value: tcp://localhost:2375
        # needed to get it working with dind-rootless
        # more details: https://www.testcontainers.org/features/configuration/#disabling-ryuk
        - name: TESTCONTAINERS_RYUK_DISABLED
          value: "true"
      command:
        - cat
      tty: true
      volumeMounts:
        - name: docker-auth-cfg
          mountPath: /home/ci/.docker
      securityContext:
        capabilities:
          drop:
            - ALL
        allowPrivilegeEscalation: false
        privileged: false
        readOnlyRootFilesystem: false
  volumes:
    - name: docker-auth-cfg
      secret:
        secretName: docker-auth
```
@czunker Your file, especially the hint to https://www.testcontainers.org/features/configuration/#disabling-ryuk, has helped a lot.
I think I got it now based on your sample. But instead of disabling Ryuk completely, the socket needs to be made available to the image that is running Testcontainers.
The disadvantage of TESTCONTAINERS_RYUK_DISABLED=true is that you cannot reuse a pod for another job, as the containers spawned by Testcontainers are still running. This only works if the pod is thrown away each time.
If your environment already implements automatic cleanup of containers after the execution, but does not allow starting privileged containers, you can turn off the Ryuk container by setting TESTCONTAINERS_RYUK_DISABLED environment variable to true.
In our setup, we want to re-use pods (with a limit of idling X minutes), since all builds download a bunch of stuff via maven.
By running top within the "dind" container, I found this:
```
Mem: 7732036K used, 8666016K free, 1508K shrd, 863852K buff, 5010748K cached
CPU: 0% usr 0% sys 0% nic 100% idle 0% io 0% irq 0% sirq
Load average: 1.43 1.42 0.90 3/730 2782
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
88 85 rootless S 1535m 9% 0 0% dockerd --host=unix:///run/user/1000/docker.sock --host=tcp://0.0.0.0:2376 --tlsverify --tlscacert /certs/server/ca.pem --tlscert /certs/server/cert.pem --tlskey /certs/server/key.pem
96 88 rootless S 1297m 8% 1 0% containerd --config /run/user/1000/docker/containerd/containerd.toml --log-level info
59 1 rootless S 695m 4% 1 0% /proc/self/exe --net=vpnkit --mtu=1500 --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run -p 0.0.0.0:2376:2376/tcp docker-init -- dockerd --host=unix:///run/user/1000/docker.sock --ho
1 0 rootless S 694m 4% 1 0% rootlesskit --net=vpnkit --mtu=1500 --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run -p 0.0.0.0:2376:2376/tcp docker-init -- dockerd --host=unix:///run/user/1000/docker.sock --host=
68 1 rootless S 127m 1% 1 0% vpnkit --ethernet /tmp/rootlesskit148726098/vpnkit-ethernet.sock --mtu 1500 --host-ip 0.0.0.0
2776 0 rootless S 1660 0% 1 0% /bin/sh
2782 2776 rootless R 1588 0% 0 0% top
85 59 rootless S 992 0% 1 0% docker-init -- dockerd --host=unix:///run/user/1000/docker.sock --host=tcp://0.0.0.0:2376 --tlsverify --tlscacert /certs/server/ca.pem --tlscert /certs/server/cert.pem --tlskey /certs/server/key.pem
```
This shows a socket located at /run/user/1000/docker.sock, which means that adding
a) an emptyDir volume at /run/user/1000 creates a shared folder at that location that "dind" can share with the other containers, and
b) setting TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE=/run/user/1000/docker.sock (see https://www.testcontainers.org/features/configuration/#customizing-docker-host-detection) makes this socket available for Ryuk to manage containers.
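As a sketch of adjustments a) and b), reusing the container names and images from the earlier Jenkins pod example (so the names are assumptions, not part of the original comment), the relevant fragment might look roughly like this:

```yaml
# Fragment only, not a complete pod spec; the socket path matches what the
# rootless daemon reported in the `top` output above.
containers:
  - name: dind
    image: docker:20.10-dind-rootless
    volumeMounts:
      - name: rootless-run
        mountPath: /run/user/1000            # the daemon writes docker.sock here
  - name: gradle
    image: openjdk:11
    env:
      - name: TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE
        value: /run/user/1000/docker.sock    # so Ryuk is pointed at the real socket path
    volumeMounts:
      - name: rootless-run
        mountPath: /run/user/1000            # shared with the dind container
volumes:
  - name: rootless-run
    emptyDir: {}
```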
Also, in my environment the "just disable TLS for docker" approach was not an option, as I had already dared to ask for privileged containers. With the following, TLS can be used without issues:
a) a shared emptyDir volume at /certs, as dind-rootless will write /certs/*, especially the client-relevant certs at /certs/client/*
b) DOCKER_HOST=tcp://localhost:2376 (for the client), as the TLS-encrypted port is 2376, not 2375
c) DOCKER_TLS_VERIFY=1 (for the client), as otherwise certs are not used when talking to the daemon
d) DOCKER_CERT_PATH=/certs/client (for the client), as otherwise the certs are looked up somewhere in the user's home directory
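Put together, a rough fragment for this TLS variant (again reusing the container names from the earlier example, and assuming DOCKER_TLS_CERTDIR is left at the dind-rootless default of /certs) could look like this:

```yaml
# Fragment only; the port and cert paths follow points a)–d) above.
containers:
  - name: dind
    image: docker:20.10-dind-rootless
    volumeMounts:
      - name: docker-certs
        mountPath: /certs                # daemon generates server and client certs here
  - name: gradle
    image: openjdk:11
    env:
      - name: DOCKER_HOST
        value: tcp://localhost:2376      # the TLS port, not 2375
      - name: DOCKER_TLS_VERIFY
        value: "1"                       # make the client actually use the certs
      - name: DOCKER_CERT_PATH
        value: /certs/client             # instead of looking in the user's home
    volumeMounts:
      - name: docker-certs
        mountPath: /certs
volumes:
  - name: docker-certs
    emptyDir: {}
```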
I'm currently trying to hide the static com.github.dockerjava dependency behind a custom facade. Then it would be possible to implement modular container providers (e.g. Kubernetes, containerd, ...), which could even be maintained by independent projects.
Would that be a feasible approach?
I would not expect that you will find a serviceable and working abstraction based on the dockerjava abstractions (which are ultimately the Docker API) that transparently works for all the possible providers and for all features that are provided by Testcontainers.
However, feel free to explore the approach further and share your findings 🙂
@kiview Yeah, this would definitely be an issue. But I think it would be acceptable to let Testcontainers itself define the set of required and therefore supported container functionalities.
I'm currently trying to hide the static com.github.dockerjava dependency behind a custom facade. Then it would be possible to implement modular container providers (e.g. Kubernetes, containerd, ...), which could even be maintained by independent projects.
I have done something similar, only one layer down and implementing the docker api instead (kubedock). I got reasonable results, and solved a few of the challenges you will encounter. Maybe this is helpful for your project too :-)
I was able to make some progress. Even though I wasn't able to implement/test every feature provided by Testcontainers, most of the basic functionality and tests are working right now (see gif below).
Some findings:
- ExposedPort and some other network-related functionalities currently depend on a K8s NodePort service, created for each container. But one could implement different "exposition strategies" utilizing techniques like kubectl port-forward or other service types like LoadBalancer, etc.
- (172.17.0.2)

Overall, it seems like this could be a viable approach.
Not (yet) fully tested/implemented:
In case someone wants to try it out, I'm happy to get some feedback: https://github.com/masinger/testcontainers-java/blob/master/docs/features/kubernetes.md
Also, consider asking your sys admin (or any other person responsible for your Kubernetes installation) whether they would be okay giving access to k8s API from inside the CI tasks, he-he :)
Actually that wouldn't be a problem. CI tasks need to spin up pods and even set up storage (as Tekton does) anyway.
@masinger This seems like an awesome direction. What's the status of this?
@bsideup NOTE - until there's k8s support, this project is pretty unusable for MOST of the Java developers, since they work with k8s.
@kiview What's your opinion regarding the awesome project of @masinger, and what's the main direction in which Testcontainers is heading regarding this issue?
This super important and anticipated issue has been inactive for some months now. Has there been any decision or progress on the subject?
@lynch19 this issue has been here for 4 years. If you scroll back through the comments you will clearly see they don't have any plans to fix it. The main argument is that the core team doesn't want to do the necessary abstraction to have different providers (i.e. k8s, docker, podman, etc.) due to some challenges and incompatibilities :)
It is highly unlikely this will change any time soon. And this is the reason why our team is not using this as well.
@lynch19 The project/fork by @masinger is public and open source, just try it out and see if it fits your need.
While I understand that this can seem to be a very important issue for individuals, we don't see this as important for the Testcontainers community as a whole, given current priorities and project focus. We thank everyone for sharing their feedback and suggestions, and there is a possibility we will explore further abstractions over Docker in the future. However, as of today, we can't give any more concrete info.
I will close this issue for now to communicate our current intent, but that does not mean it won't get revisited in the future.
Sorry to jump in this late, we've just started using Testcontainers. My first thing to say about the project is that you're doing a really good job, it is nice and useful 👏.
Right now we use GitLab CI and Testcontainers is running well. However, all our deployment runs on k8s, and at some point in the future we would like to move to k8s runners. So, knowing you don't plan to integrate with it makes me a bit uneasy.
Quoting @kiview (in an old comment, sorry):
However, the maximum size of those tests and systems should be as such, that they are still runnable on a local developer machine
This makes me understand that you see k8s as something not used locally. And nowadays you're mostly right. However, there are two strong reasons why this is changing pretty fast:
In my opinion the second point is very important. One of the reasons I see Docker as pretty beneficial was preventing the "works on my machine" problem. However, if companies move to k8s (as said, this is happening quickly) and developers keep working on Docker, we could experience the "works on my machine" problem again. In my understanding, that's why we are starting to see many alternatives to Docker Desktop based on k8s, and even Docker Desktop supports having a local k8s cluster. And this is why I'm advocating switching to k8s locally. You get two benefits from this: preventing "works on my machine" and improving developer familiarity with k8s. My perception is that this trend will increase.
Take these insights just as added reasons to consider supporting k8s.
Thanks.
So you can use this with GitLab CI k8s runners now, using dind-rootless on GKE, if you use the containerd Ubuntu node images.
Config I use: http://pastie.org/p/2yokK0akSbbjDOsrevOo7r
The mounts are important for overlay2 to work, as are the Ubuntu images, because they carry a kernel patch needed for overlay2. Without overlay2 you end up with vfs, and it is basically unusably slow. Either way, native k8s support would be nice.
Hey @jaybi4, thanks for sharing your view.
I don't see real issues with the plans and considerations you have outlined here. You can use k8s as the executor for your GitLab CI and still use Testcontainers, since Testcontainers works equally well with a remote Docker daemon. And there are many ways to provide a remote Docker daemon for your Testcontainers workloads (you might also want to check out https://www.testcontainers.cloud/, which greatly mitigates any kind of "works on my machine" issues).
In addition, the alternatives to Docker Desktop we see emerging tend to provide Docker-compatible REST APIs (Colima, Rancher Desktop, Podman Desktop). While they don't provide a 100% compatible API in all cases, they have been doing a good job improving in the recent past. That's also the reason why Testcontainers works with those Docker Desktop alternatives if configured correctly.
Many companies moving to k8s for deployment
I think this is not a factor that influences which API to use for creating and instrumenting ephemeral integration testing environments. We also see users switching to Cloud IDEs (such as Codespaces), how would they work with Testcontainers if Testcontainers uses k8s as its container runtime environment? How would CIs such as GitHub Actions work? In my opinion, this would simply create an even bigger external dependency (having k8s available instead of having a Docker daemon available). Or we would need to develop another abstraction layer, which allows either Docker or k8s as the container runtime. And while this might be theoretically possible, there are big risks of an impedance mismatch in concepts between Docker and k8s and it would be a considerable engineering effort.
I'd also like to point out that Testcontainers supports testing of k8s components, through our k3s module or https://github.com/dajudge/kindcontainer.
So when we talk about k8s support, it is often difficult to understand what is meant with k8s support, depending on the context.
Thanks a lot @sharkymcdongles , I'll keep this for the future me😁
@kiview thanks for your complete answer. Maybe the fact that I haven't yet tried to integrate Testcontainers and k8s is what makes my issues not real. In that case, thanks again for the clarification. I'll forward any issues/concerns I face when I integrate Testcontainers and k8s.
@kiview what people mean by k8s support is deploying containers directly into k8s and running the tests there, rather than via a local docker daemon. So instead of docker compose up, you would deploy multiple containers as pods, or as a single pod, to k8s, where the tests would run.
One option (this is just a very basic example I wrote in 1 min, not fully fleshed out, and provided as a simple example): testcontainers runs outside the cluster and uses a kubeconfig or other auth to a kube API for a cluster, which will then deploy all of the various containers needed to run the tests. In this instance, there is no docker daemon involved, meaning it will run with any container runtime interface, since Kubernetes would be handling the deployment and running of the containers. You also wouldn't need a socat container, since calls could run via the kube internal networking, or via localhost if the jobs are spun up in a single pod.
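To make that idea concrete, this is the kind of throwaway pod such a hypothetical k8s-backed provider might generate instead of a docker run; everything here (names, the label key, the PostgreSQL image choice) is illustrative rather than anything Testcontainers produces today:

```yaml
# Illustrative only: a short-lived pod standing in for a `docker run` of a
# database container; a session label could let a reaper clean up leftovers.
apiVersion: v1
kind: Pod
metadata:
  generateName: testcontainers-postgres-
  labels:
    testcontainers-session: abc123       # hypothetical session id for cleanup
spec:
  restartPolicy: Never
  containers:
    - name: postgres
      image: postgres:14-alpine
      env:
        - name: POSTGRES_PASSWORD
          value: test
      ports:
        - containerPort: 5432
      readinessProbe:                    # would stand in for a wait strategy
        tcpSocket:
          port: 5432
```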
Even if Kubernetes support like the above isn't something on the radar, I would at least think native support for containerd would be a nice addition, since docker is losing out more and more every day.
what people mean by k8s support would mean deploying containers directly into k8s and running tests there rather than via a local docker daemon
This is what some people mean, while others simply mean being able to run Testcontainers based tests in their k8s powered CI, the reality is much more complex 😉 (as is an implementation of k8s as the container runtime, as also others have found out in the past).
since docker is losing out more and more everyday
Can you back this up with data in the context of development (not operations)? It does not really reflect what I perceive as a Testcontainers maintainer, supporting a wide range of different users.
I hope this answer and the previous answers by me and @bsideup in this thread help to understand the view of the Testcontainers project on this topic. We don't plan to dive into more discussions around this in the short-term future.
Testcontainers based tests in their k8s powered CI, the reality is much more complex
This is already achievable as I showed above and works fine. It would just be nice to not need privileged containers to do it, but this isn't a problem from testcontainers.
(as is an implementation of k8s as the container runtime, as also others have found out in the past).
You shouldn't need to do anything with the k8s container runtime to implement this. To do this, you would generate objects to pass to the kube API; there shouldn't be any need to touch the container runtime directly. The most complicated issue would be adjusting the library to generate manifests instead of talking directly to the docker socket, because this would be fully new code and not reusable or even repurposable. I suppose you could make some shim to translate the docker plans into k8s manifests to make it easier, and then it is just spec transformation instead of actual logic. Then the way to verify and fetch logs and metrics would also need its own adjustment.
But yes it is a larger effort than a quick one.
Can you back this up with data in the context of development (not operations)? It does not really reflect what I perceive as a Testcontainers maintainer, supporting a wide range of different users.
It isn't about operations so much as it is about the ecosystem around containers in general. Many Linux distros are phasing out or dumping docker completely, e.g. Fedora, RHEL, and CentOS: https://access.redhat.com/solutions/3696691
When you install "docker" on newer versions, you actually get podman with an aliased wrapper that mimics docker instead. Ubuntu seems to be following suit as well. Kubernetes also killed docker support completely and now uses containerd or CRI-O. In general, we will see this trend continue, especially now that Kubernetes and GCP are pushing non-docker setups heavily.
Another reason why switching makes sense is performance. Docker is more of a shim/API for talking to containerd, which then creates the cgroups and processes. With podman and more current implementations, this entire shim layer is removed, allowing for direct communication, meaning pulls are quicker, containers perform better, and containers boot up faster. By some metrics, you can see 30% compute performance increases. I can try to dig some up after the holiday if you want, because I am on mobile right now and heading out for the weekend.
@kiview
@sharkymcdongles
Config I use: http://pastie.org/p/2yokK0akSbbjDOsrevOo7r
The link doesn't work. Can you please repost it as code in github? It'd be a great resource for someone like me moving to gitlab
Sorry, I thought I set it to never expire. I added it there because for some reason code formatting wasn't working on GitHub. It may still not work, but here are my Helm values for the gitlab-runner Helm chart:
```yaml
checkInterval: 10
concurrent: 30
fullnameOverride: gitlab-runner
gitlabUrl: https://git.x.com
image:
  image: library/gitlab-runner
  registry: X
imagePullPolicy: IfNotPresent
logFormat: json
logLevel: error
metrics:
  enabled: true
podSecurityContext:
  fsGroup: 65533
  runAsUser: 100
probeTimeoutSeconds: 5
rbac:
  clusterWideAccess: false
  create: false
  podSecurityPolicy:
    enabled: false
    resourceNames:
      - gitlab-runner
  serviceAccountName: gitlab-runner
resources:
  limits:
    memory: 512Mi
  requests:
    cpu: 200m
    memory: 512Mi
runnerRegistrationToken: RUNNER_TOKEN
runners:
  config: |-
    [[runners]]
      name = "infrastructure"
      output_limit = 20480
      request_concurrency = 30
      environment = ["FF_USE_FASTZIP=true"]
      builds_dir = "/builds"
      [runners.cache]
        Type = "gcs"
        Path = "cache"
        Shared = true
        [runners.cache.gcs]
          BucketName = "gitlabbucketxxx69"
      [runners.custom_build_dir]
        enabled = true
      [runners.kubernetes]
        host = ""
        bearer_token_overwrite_allowed = false
        namespace = "gitlab-runner"
        namespace_overwrite_allowed = ""
        privileged = true
        cpu_request = "500m"
        memory_limit = "4Gi"
        memory_request = "4Gi"
        memory_limit_overwrite_max_allowed = "24Gi"
        memory_request_overwrite_max_allowed = "24Gi"
        service_cpu_request = "100m"
        service_memory_limit = "8Gi"
        service_memory_request = "8Gi"
        service_memory_limit_overwrite_max_allowed = "12Gi"
        service_memory_request_overwrite_max_allowed = "12Gi"
        helper_cpu_request = "250m"
        helper_memory_limit = "2Gi"
        helper_memory_request = "256Mi"
        helper_memory_limit_overwrite_max_allowed = "4Gi"
        helper_memory_request_overwrite_max_allowed = "4Gi"
        image_pull_secrets = ["secret"]
        poll_timeout = 900
        pull_policy = "if-not-present"
        service_account = "gitlab-runner"
        service_account_overwrite_allowed = ""
        pod_annotations_overwrite_allowed = ""
        [runners.kubernetes.node_selector]
          runner = "true"
        [runners.kubernetes.affinity]
          [runners.kubernetes.affinity.pod_anti_affinity]
            [[runners.kubernetes.affinity.pod_anti_affinity.required_during_scheduling_ignored_during_execution]]
              topology_key = "kubernetes.io/hostname"
              [runners.kubernetes.affinity.pod_anti_affinity.required_during_scheduling_ignored_during_execution.label_selector]
                [[runners.kubernetes.affinity.pod_anti_affinity.required_during_scheduling_ignored_during_execution.label_selector.match_expressions]]
                  key = "job_name"
                  operator = "In"
                  values = ["build","release"]
        [runners.kubernetes.pod_annotations]
          "cluster-autoscaler.kubernetes.io/safe-to-evict" = "false"
        [runners.kubernetes.pod_labels]
          "job_id" = "${CI_JOB_ID}"
          "job_name" = "${CI_JOB_NAME}"
          "ci_commit_sha" = "${CI_COMMIT_SHA}"
          "ci_project_path" = "${CI_PROJECT_PATH}"
        [runners.kubernetes.pod_security_context]
        [runners.kubernetes.volumes]
          [[runners.kubernetes.volumes.empty_dir]]
            name = "build-folder"
            mount_path = "/builds"
            medium = "Memory"
          [[runners.kubernetes.volumes.empty_dir]]
            name = "buildah-containers"
            mount_path = "/var/lib/containers"
          [[runners.kubernetes.volumes.empty_dir]]
            name = "docker-certs"
            mount_path = "/certs/client"
            medium = "Memory"
          [[runners.kubernetes.volumes.empty_dir]]
            name = "docker"
            mount_path = "/var/lib/docker"
        [runners.kubernetes.dns_config]
  executor: kubernetes
  locked: false
  name: infrastructure
  protected: false
  runUntagged: true
  tags: infrastructure
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  privileged: false
  readOnlyRootFilesystem: false
  runAsNonRoot: true
terminationGracePeriodSeconds: 300
unregisterRunner: true
```
The mounts are integral to this working because of overlay2. You also need to ensure you use Ubuntu containerd nodes and the Ubuntu image for the runner image, due to the kernel patch for overlay2.
Then you can just run jobs as normal by setting the service image to dind-rootless instead of dind, e.g.:
```yaml
stages:
  - build

before_script:
  - docker info

build:
  stage: build
  image: docker:20.10.5
  services:
    - docker:20.10.5-dind-rootless
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
  script:
    - docker ps
```
Anyone tried https://github.com/joyrex2001/kubedock
Yes, we’re kubedock pretty successfully for testing (reasonably large london fintech). We are still ironing out some issues but generally it seems pretty stable.
I use it also for a single project, no issue.
No issues with Testcontainers for Java and Node, but I was not able to make it work for .NET. It looks like kubedock doesn't create the bind sidecar to mount the socket for the Ryuk reaper, but this only happens for the dotnet container.
If anyone is facing the same... I'm not sure where the problem is.
Hi,
Thanks a lot. The issues #700 and #1135 helped me to get testcontainers running in Jenkins with dind-rootless inside Kubernetes. In hope this might help someone else, a stripped down example for a Jenkins build job pod definition: [...]
This solved my problem of running Testcontainers for tests (in Maven) on EKS 1.24.
Since remote Docker daemons are supported, it would also be possible to run a virtual machine with a Docker setup inside a Kubernetes pod using KubeVirt (kubevirt.io). I have used KubeVirt in the past and it works quite well and robustly. The VM could be a simple Debian VM with a Docker daemon providing remote access. Then create your test pod as usual with, for Jenkins for example, an agent container, the other containers you need, and the KubeVirt VM. Then everything would work transparently and much better than having to host one big VM for all Docker jobs. Also from the networking side it would be easy, since 'localhost' can still be used.
The issues with Kubernetes are real; it is just more complex having to manage VMs outside of Kubernetes, but since you can also use KubeVirt, it all becomes more easily manageable from within Kubernetes.
Since running a Docker daemon inside a Kubernetes cluster ranges from tricky to (in a properly secured cluster) impossible, I think the only satisfactory "Kubernetes support" would mean having TC APIs transparently switch to creating Pods, either directly or via an intermediate service. Whatever process directly creates the pods would need to be authorized to create, manage, and delete pods in a specific namespace, preferably one separate from the namespace where the test code runs. (The Hierarchical Namespace Controller would make this easier to manage and secure.) My understanding is that such a switch of backend is out of scope for the project currently.
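For illustration only, the namespace-scoped permissions such a pod-creating backend would roughly need could be expressed like this; the namespace and service account names are made up:

```yaml
# Rough sketch: a Role limited to one namespace, plus a binding for the
# (hypothetical) service account the test/CI workloads run under.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: testcontainers-runner
  namespace: testcontainers       # namespace dedicated to the test containers
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "services"]
    verbs: ["create", "get", "list", "watch", "delete"]
  - apiGroups: [""]
    resources: ["pods/exec", "pods/portforward"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: testcontainers-runner
  namespace: testcontainers
subjects:
  - kind: ServiceAccount
    name: ci-test-runner          # hypothetical SA of the pods running the test code
    namespace: ci                 # separate namespace where the test code runs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: testcontainers-runner
```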
Testcontainers depends on Docker, and raw Docker is an issue in a Kubernetes-managed environment (e.g. Jenkins X). It ends up either using the /var/run/docker.sock escape hatch or dind, and both approaches have issues. Would you consider adding native Kubernetes support? Native Kubernetes support would even make Docker optional in a Kubernetes environment.
See also: https://github.com/testcontainers/testcontainers-java/issues/449