testcontainers / testcontainers-java

Testcontainers is a Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container.
https://testcontainers.org
MIT License
7.96k stars · 1.64k forks

Accessing containers from other containers and host by using the same hostname #5094

Open OLibutzki opened 2 years ago

OLibutzki commented 2 years ago

Context

I have a Spring Boot application which uses Keycloak for its identity management. I would like to write a UI test which interacts with the application. As the application uses Keycloak, the user is redirected to Keycloak in order to log in at the first request. After logging in successfully, the user is redirected back to the application.

Test setup

I have two containers (Keycloak and Browser). In order to get things working the browser (which runs in a container) needs to access the application (running on the host) and Keycloak (running in a different container).

Moreover, the host needs to access Keycloak using the same hostname as the browser container does...

I tried two approaches to realize this:

Approach 1

The Browser Container accesses Keycloak using host.testcontainers.internal. This works fine, but unfortunately the host cannot resolve host.testcontainers.internal in order to access Keycloak.
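For reference, a minimal sketch of what Approach 1 typically looks like (the port value and names are illustrative assumptions, not from the original setup):

```java
import org.testcontainers.Testcontainers;
import org.testcontainers.containers.BrowserWebDriverContainer;

// Make the host port reachable from containers under the
// host.testcontainers.internal alias.
int appPort = 8081; // in practice: the random port Spring Boot chose
Testcontainers.exposeHostPorts(appPort);

BrowserWebDriverContainer<?> browser = new BrowserWebDriverContainer<>()
        .withAccessToHost(true); // required so the alias resolves inside the container

// URL the browser container can use -- but, as described above,
// the host itself cannot resolve this hostname:
String appUrl = "http://host.testcontainers.internal:" + appPort;
```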

Approach 2

I put the Browser container and the Keycloak container into a dedicated network. The result is almost the same, the host is not part of this network and therefore it cannot access Keycloak.

Conclusion

The most obvious solution might be to make host.testcontainers.internal resolvable from the host. Another option would be to let the host participate in a certain network. But maybe there are more ways to solve this...

Btw. using host.docker.internal (on Windows) all over the place works fine as this hostname is available within containers AND on the host.

kiview commented 2 years ago

Hey @OLibutzki, unfortunately, there is only so much we can do here on the Testcontainers side to mix host and container networking.

The problem with host.docker.internal is that it only works on Docker Desktop, and therefore not on Linux. If you know that you will run your tests only with Docker Desktop, you can go ahead and use this approach.

If you would run your application as a container rather than directly on the host, you could also make use of Docker networks and therefore have a stable network alias for the application. Not sure if this works for your use case.
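A hedged sketch of that option (the application image name is hypothetical; only the Keycloak image is from this thread):

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.Network;

// Both containers join one network; "keycloak" becomes a stable hostname
// that any container on that network can resolve.
Network network = Network.newNetwork();

GenericContainer<?> keycloak = new GenericContainer<>("quay.io/keycloak/keycloak:latest")
        .withNetwork(network)
        .withNetworkAliases("keycloak")
        .withExposedPorts(8080);

// If the application also ran as a container on the same network,
// it could use http://keycloak:8080 as a stable Auth-Server-URL.
GenericContainer<?> app = new GenericContainer<>("myorg/my-app:latest") // hypothetical image
        .withNetwork(network);
```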

OLibutzki commented 2 years ago

there is only so much we can do here on the Testcontainers side to mix host and container networking.

You mean there is not much you can do?

If you know that you will run your tests only with Docker Desktop, you can go ahead and use this approach.

It has to be executed on CI (Unix) as well.

If you would run your application as a container rather than directly on the host, you could also make use of Docker networks and therefore have a stable network alias for the application. Not sure if this works for your use case.

Unfortunately, that's not an option either, because I want to have access to the Spring beans in order to verify some behaviour like "ensure that method xy is called".

All that being said, I'm happy that I found a solution in the meantime. I demonstrated this solution in a GitHub Actions workflow which you can find here.

I had to do two things:

  1. Add an entry to the hosts file to map host.docker.internal to 127.0.0.1. By doing this, the host can resolve host.docker.internal, as you can see here.

  2. Add an extra host to the container which maps host.docker.internal to host-gateway. host-gateway was introduced in Docker 20.10. By doing this, the container can resolve host.docker.internal.

With this setup I have been able to execute the test which you can find here: https://github.com/OLibutzki/mailsender/blob/main/src/test/java/de/libutzki/mailsender/MailSenderApplicationIntegrationTest.java
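In Testcontainers terms, step 2 can be sketched like this (the image name is illustrative; step 1 stays a manual hosts-file entry on the host):

```java
import org.testcontainers.containers.GenericContainer;

// Step 1 (on the host, outside Java): add to /etc/hosts so the host
// itself can resolve the name:
//   127.0.0.1  host.docker.internal
//
// Step 2: map host.docker.internal to Docker's host-gateway (Docker 20.10+)
// so the container can resolve the very same name.
GenericContainer<?> keycloak =
        new GenericContainer<>("quay.io/keycloak/keycloak:latest")
                .withExtraHost("host.docker.internal", "host-gateway");
```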

bsideup commented 2 years ago

Sorry, not sure I understand why can't you use the Network abstraction? Exposing host works fine with custom networks: https://github.com/testcontainers/testcontainers-java/blob/5e016c9432c71f263493d144214f5fb047268d84/core/src/test/java/org/testcontainers/containers/ExposedHostTest.java#L67

OLibutzki commented 2 years ago

The special requirement in this scenario is the need to access Keycloak from the user's browser and from the application's backend using the very same hostname.

You pass one Auth-Server-URL to the Spring Boot application. This URL is used to...

  1. ... request Keycloak from the application
  2. ... redirect the user to Keycloak if she doesn't have a valid token

The first request is done by the application's backend. The second request is just a redirect in the user's browser... So the browser has to be able to access the redirection target.

For both URLs the very same host is used, configured using the Auth-Server-URL.

In my scenario the application runs on the host and the browser in a container... That's the reason why both need to access Keycloak using the same hostname.

Perhaps I'm missing something about how this communication with Keycloak (OpenID Connect) works. Maybe @dasniko can verify or refute that same-hostname constraint?

kiview commented 2 years ago

I understand that having the beans accessible makes writing certain tests easier. But for setups where networking and DNS have to be configured in such a specific way, running the application as a container as well and doing the tests in an out-of-process fashion is generally the approach I would suggest.

You say

Unfortunately, that's not an option either, because I want to have access to the Spring beans in order to verify some behavior like "ensure that method xy is called".

but a good test of such an end-to-end flow should be able to assess behavior in a closed-box style as well, rather than looking into implementation details. But this is just a rule of thumb, I don't know your code and use case well enough to give an ultimate recommendation on this.

I think a diagram of the involved components would also help with understanding the problem better.

OLibutzki commented 2 years ago

Hi @kiview,

here is the diagram with a short explanation:

[diagram: testcontainers-5094]

  1. The test connects to the browser running in a container via CDP (Chrome DevTools Protocol).
  2. The browser accesses the application by calling http://myhostname:app-port. The app-port is randomly chosen by Spring Boot and exposed using Testcontainers#exposeHostPorts(int...). The browser container is initialized with accessToHost = true.
  3. The application detects that the user does not have a valid access token and calls a well-known Keycloak URL. Keycloak's response contains a URL where the browser needs to redirect the user to... at this URL the user gets a login prompt in order to authenticate. How does Keycloak build the URL? It takes the hostname which the Spring Boot application used for performing the request against Keycloak and adds some paths to it.
  4. The Spring Boot application responds to the request in step 2 with an HTTP 302 (Redirect) and redirects the browser to the login URL which has been returned by Keycloak in step 3.
  5. The browser performs the redirect and the login screen is displayed.

My main challenge is the following: Keycloak needs to be accessible from the Spring Boot application with the very same hostname as from the browser. Why? Because the URL of the login page is built using the hostname which the Spring Boot application uses for its request against Keycloak. This login URL is then passed to the browser, and the browser has to be able to resolve the hostname contained in this URL as well.

I hope that this explanation helps.

bastoker commented 5 months ago

At work we came across the same problem regarding consistent hostnames across the Docker network boundary managed by Testcontainers.

We solved it by adding an HTTP proxy server to our Docker network and connecting our browser, which runs normally (i.e. outside Docker), to it.


I've tested a couple of proxy servers, but Squid was the only one that was 100% stable in this setup, both on our laptops (a lot of M1s) and on CI (Azure DevOps Pipelines).

The following is the code we've used as a Squid Testcontainer:

public class SquidProxyContainer<SELF extends SquidProxyContainer<SELF>> extends GenericContainer<SELF> {

    private static final DockerImageName DEFAULT_SQUID_PROXY_IMAGE_NAME = DockerImageName.parse("ubuntu/squid:latest");

    public static final Integer PROXY_PORT = 3128;

    public SquidProxyContainer() {
        super(DEFAULT_SQUID_PROXY_IMAGE_NAME);
        this.addExposedPort(PROXY_PORT);
    }

    public String getProxyUrl() {
        return "http://%s:%s".formatted(getHost(), getFirstMappedPort());
    }
}

Since the proxy server itself is running inside the Docker network (the network managed by Testcontainers), all hostnames like keycloak and backend resolve to the same hostnames in the browser, because Chrome is instructed to use that proxy.
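Instructing Chrome to use the proxy might look like this (a sketch assuming Selenium's ChromeOptions; the literal URL stands in for a call to getProxyUrl() on a started SquidProxyContainer):

```java
import org.openqa.selenium.chrome.ChromeOptions;

// Route all browser traffic through the Squid container so that
// Docker-network hostnames (e.g. "keycloak") resolve inside the browser.
String proxyUrl = "http://localhost:3128"; // in practice: squid.getProxyUrl()
ChromeOptions options = new ChromeOptions();
options.addArguments("--proxy-server=" + proxyUrl);
```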

The only real downside is that the unit under test needs to run as a testcontainer as well, just like @kiview already mentioned briefly:

If you would run your application as a container rather than directly on the host, you could also make use of Docker networks and therefore have a stable network alias for the application. Not sure if this works for your use case.

Our context is obviously that of a more integrated systems test, with the added benefit that we really know our docker images are working (as opposed to e.g. a @SpringBootTest).

marbon87 commented 4 months ago

I had the exact same problem and solved it by using the IP address of the Keycloak container for the communication from Spring Boot and Selenium with Keycloak. Not that pretty, but it works:

@DynamicPropertySource
public static void setProperties(DynamicPropertyRegistry registry) {
    registry.add("custom.issuer-uri", () -> "http://"
            + keycloakContainer.getContainerInfo().getNetworkSettings()
                    .getNetworks().get("bridge").getIpAddress()
            + ":8080/realms/test");
}

jogerj commented 2 weeks ago

I had the same issue, except that instead of Chrome I'm trying to run a k6 testcontainer. The solution I have is to run the container in host network mode. In this example I'm using the k6 and Keycloak Testcontainers modules:

class K6IntegrationTest {
  @Container
  static final K6Container k6 = new K6Container("grafana/k6:latest")
    .withTestScript(MountableFile.forClasspathResource("k6/main.js"))
    .withCopyToContainer(MountableFile.forClasspathResource("k6"), "/home/k6")
    .withNetworkMode("host");

  @Container
  static final KeycloakContainer keycloak = new KeycloakContainer("quay.io/keycloak/keycloak:latest")
    .withRealmImportFile(PATH_TO_REALM_JSON);

  static String getKeycloakAuthServerUrl() {
      return "http://localhost:" + keycloak.getMappedPort(8080);
  }

  @DynamicPropertySource
  static void registerProperties(DynamicPropertyRegistry registry) {
     registry.add("keycloak.auth-server-url", K6IntegrationTest::getKeycloakAuthServerUrl);
     registry.add(
          "spring.security.oauth2.resourceserver.jwt.issuer-uri",
          () -> getKeycloakAuthServerUrl() + "/realms/foo_realm");
  }
}

So now when k6 calls http://localhost:12345, it resolves to Keycloak correctly to get the token, and the JWT issuer-uri will be of the same origin. Then when Spring Boot receives an HTTP request, the issuer URI from the Authorization Bearer token in the request header will also be valid and resolve to Keycloak.

At the moment

The host networking driver only works on Linux hosts, but is available as a Beta feature, on Docker Desktop version 4.29 and later.

So on Windows with WSL, you need to enable this feature flag.