Open ErikEngerd opened 3 months ago
I found the answer by looking in the source code. I can get it to work by specifying hostaliases (as documented). However, I think kubedock should also expose a service when the Hostname has been set for a container.
In addition, I see that when the hostname is set, the pod definition does not have a .spec.hostname field.
So these would be two changes:

1. expose a service when the container's Hostname is set (in addition to hostaliases), and
2. set pod.spec.hostname to the configured hostname.
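Concretely, the desired end state for a container created with Hostname `converge` might look like the sketch below. All names, labels, and the image are illustrative placeholders, not what kubedock actually generates:

```yaml
# Illustrative sketch only: names and labels are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: converge            # change 1: service named after the container's Hostname
spec:
  selector:
    kubedock: abc123        # hypothetical label selecting the pod
  ports:
    - port: 8000
      targetPort: 8000
---
apiVersion: v1
kind: Pod
metadata:
  name: kubedock-abc123     # generated pod name
  labels:
    kubedock: abc123
spec:
  hostname: converge        # change 2: propagate the configured Hostname
  containers:
    - name: main
      image: example/converge:latest
      ports:
        - containerPort: 8000
```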
In my specific test I am testing a rendez-vous server: an agent connects to the rendez-vous server, then a client connects to it as well. Next the client verifies it can reach the agent through the rendez-vous server by executing a hostname command on the agent. That should return the configured hostname, but it returns the generated pod name instead (so that is bullet item 2).
Also, I think it should be possible to eliminate the flattening of networking as described in the docs, albeit at some additional complexity. It is possible to configure a custom DNS for a pod using pod.spec.dnsConfig and dnsPolicy. I have used this in earlier projects to simulate an existing non-k8s network setup.
In this way, it would be possible for different groups of pods to use a custom DNS server where all host aliasing would be resolved. So if test1 spins up two containers, it would spin up 2 pods and 1 DNS server, instead of defining services with the actual hostnames/hostaliases. Through pod.spec.dnsConfig, the /etc/resolv.conf inside the container would make sure that the DNS server container is used. The DNS server container (coredns based) does the required CNAME mappings and finally delegates to the standard DNS server in kubernetes.
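The pod-level wiring could look roughly like this (a sketch: the nameserver IP stands in for the ClusterIP of the per-test coredns service, and the search domain is illustrative):

```yaml
# dnsPolicy "None" disables the kubelet-generated resolv.conf, so the pod
# resolves exclusively through the custom coredns instance listed below.
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 10.96.0.53            # hypothetical ClusterIP of the test's coredns service
    searches:
      - test1.svc.cluster.local
```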
The services could also get generated names like the pods (but with names known by kubedock beforehand). In that way multiple tests can be done in parallel because they will never use conflicting service names. The hostnames and hostaliases are mapped using CNAME records in coredns to these generated service names. As an added bonus this would also support fully qualified hostnames as aliases, which docker supports out of the box, but which kubernetes does not allow.
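As a sketch of the coredns side: the author mentions CNAME records, which could be served from a zone file via the `file` plugin; another common way to express the same mapping is the `rewrite` plugin, shown here. The hostnames, generated service names, and namespace are all hypothetical:

```
# Hypothetical Corefile: map docker hostnames/aliases to generated services,
# delegate everything else to the cluster's standard DNS.
.:53 {
    rewrite name exact converge kd-svc-1a2b.test-ns.svc.cluster.local
    rewrite name exact agent.example.com kd-svc-3c4d.test-ns.svc.cluster.local
    forward . /etc/resolv.conf
    log
    errors
}
```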
Since creation of containers is in general dynamic with testcontainers, this setup would have to be dynamic as well, but there are also solutions for that. A simple solution is to generate a configmap with the required DNS configuration for coredns and simply update the configmap as new containers are added, followed by a rollout restart of coredns.
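The dynamic part could be as small as regenerating the rewrite rules from the current hostname-to-service map whenever a container is added, then writing them into the configmap. A minimal sketch in Go (the hostnames and generated service names are hypothetical; the configmap update and rollout restart are left out):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// renderRewrites turns a hostname -> generated-service map into CoreDNS
// rewrite rules for the given namespace. Only the generation step is shown;
// pushing the result into a configmap is a separate concern.
func renderRewrites(aliases map[string]string, namespace string) string {
	names := make([]string, 0, len(aliases))
	for h := range aliases {
		names = append(names, h)
	}
	sort.Strings(names) // deterministic output for easy diffing

	var b strings.Builder
	for _, h := range names {
		fmt.Fprintf(&b, "rewrite name exact %s %s.%s.svc.cluster.local\n",
			h, aliases[h], namespace)
	}
	return b.String()
}

func main() {
	rules := renderRewrites(map[string]string{
		"converge": "kd-svc-1a2b", // hypothetical generated service names
		"agent":    "kd-svc-3c4d",
	}, "test-ns")
	fmt.Print(rules)
}
```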
That would make it really easy to run multiple jobs side by side in a single namespace. Using a single namespace also makes it easier to secure things using standard RBAC, and network policies could provide the required isolation between different jobs. I would also be willing to help implement this (I am doing quite some go development right now).
I just created a pull request for supporting the Hostname as in the second comment above:
https://github.com/joyrex2001/kubedock/pull/95
This is a very simple code-level change. I tested it with my project and it works. I was also wondering whether there are any tests that need updating. I see that in startContainer() in deploy.go the pod spec is created based on a types.Container object. Perhaps it would be good to extract the code that creates the pod spec from the types.Container object into a separate function so it can be unit tested.
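The idea of that refactor can be sketched as follows. The types here are simplified stand-ins, not kubedock's actual types.Container or the k8s.io/api/core/v1 pod spec; the point is only that a pure build function is trivially unit-testable:

```go
package main

import "fmt"

// Container is a simplified stand-in for kubedock's types.Container.
type Container struct {
	Name     string
	Hostname string
}

// PodSpec is a simplified stand-in for the generated pod spec.
type PodSpec struct {
	Hostname string
}

// buildPodSpec is the kind of pure function that could be extracted from
// startContainer() in deploy.go so it can be tested in isolation.
func buildPodSpec(c Container) PodSpec {
	spec := PodSpec{}
	if c.Hostname != "" {
		spec.Hostname = c.Hostname // maps to pod.spec.hostname
	}
	return spec
}

func main() {
	spec := buildPodSpec(Container{Name: "agent", Hostname: "agent"})
	fmt.Println(spec.Hostname)
}
```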
> Also, I think it should be possible to eliminate the flattening of networking as described in the docs, albeit at some additional complexity. It is possible to configure a custom DNS for a pod using pod.spec.dnsConfig and dnsPolicy. I have used this in earlier projects to simulate an existing non-k8s network setup.
That sounds really promising!
Currently we're using Kubedock in our pipeline, on a private cloud Kubernetes cluster managed with Rancher and with quotas enabled. We have a custom-made JUnit5 extension that works together with Testcontainers and initiates a Kubedock process per job when needed, as envisioned in https://github.com/joyrex2001/kubedock/issues/62.
But honestly, binding each job to a unique namespace is a bit of a pain point for us, for various reasons.
If it were possible to run Kubedock in a single shared namespace, all these problems would be solved and the whole process would become super convenient.
Meanwhile, I have done some prototyping to demonstrate that running multiple pods in the same namespace can be done while simulating a docker setup with containers connected to different networks.
It is as simple as adding annotations to pods indicating the host name(s) and the network(s) they belong to. Have a look here: https://github.com/ErikEngerd/kubedock-dns
Next step is to do some prototyping with kubedock to have it add the required annotations. That would be all that kubedock needs to do, apart from no longer creating services. This should allow multiple concurrent tests to be run in a single namespace.
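To illustrate the idea, the annotations on a pod might look something like this. The annotation keys below are hypothetical placeholders; the actual keys are defined in the kubedock-dns repository linked above:

```yaml
# Illustrative only; see the kubedock-dns repo for the real annotation keys.
apiVersion: v1
kind: Pod
metadata:
  name: kubedock-abc123
  annotations:
    kubedock-dns/hostname: "converge"    # hypothetical key: hostname(s) of this pod
    kubedock-dns/networks: "net1,net2"   # hypothetical key: network(s) it belongs to
```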
I am running a simple test that spawns three containers where the second container connects to the first using a hostname. This does not work as the service is not created.
I am testing locally, running `kubedock server --port-forward`. This does not work since the second container ('agent') cannot resolve the hostname of the first ('converge'). If I look at the pod definition of the first container, it does contain a containerPort definition for port 8000, which it is supposed to use. In the definition of the first container, I am specifying a generic container request like this (it is go code):
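(The original snippet is not included here; the following is a hedged reconstruction of what such a testcontainers-go request typically looks like. The image name and port are placeholders, not the original code from the issue.)

```go
package main

import (
	"context"

	"github.com/testcontainers/testcontainers-go"
)

func main() {
	ctx := context.Background()
	// Illustrative request: a container with a configured Hostname, which is
	// expected to become resolvable by the other containers in the test.
	req := testcontainers.ContainerRequest{
		Image:        "example/converge:latest", // placeholder image
		Hostname:     "converge",
		ExposedPorts: []string{"8000/tcp"},
	}
	c, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	if err != nil {
		panic(err)
	}
	defer c.Terminate(ctx)
}
```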
I was expecting this to create a service for the hostname in the Hostname field. But I don't see any services with 'kubectl get svc'.
The test is working with docker. What could be the reason that no services are created?