abhishek-rdm opened this issue 3 years ago
Hey @abhishek-rdm,
can you please specify how you create the container?
@kiview sure
```java
public final class Containers {

    private static final Logger LOGGER = LoggerFactory.getLogger(Containers.class);

    public static final PostgreSQLContainer POSTGRES = new PostgreSQLContainer<>()
            .withPrivilegedMode(true)
            .withDatabaseName("dm_db")
            .withPassword("password")
            .withCopyFileToContainer(MountableFile.forClasspathResource("dbscripts"), "/docker-entrypoint-initdb.d")
            .withUsername("postgres");

    public static final GenericContainer S3 = new GenericContainer<>("adobe/s3mock")
            .withExposedPorts(9090)
            .withPrivilegedMode(true)
            .waitingFor(Wait.forLogMessage(".*Started S3MockApplication.*", 1))
            .withEnv("initialBuckets", "Bucket-1")
            .withLogConsumer(new Slf4jLogConsumer(LOGGER));

    public static final GenericContainer WIREMOCK = new GenericContainer<>("rodolpheche/wiremock")
            .withExposedPorts(8080)
            .withPrivilegedMode(true)
            .waitingFor(Wait.forLogMessage(".*port.*8080.*", 1))
            .withCopyFileToContainer(
                    MountableFile.forClasspathResource("mappings"), "/home/wiremock/mappings")
            .withLogConsumer(new Slf4jLogConsumer(LOGGER));
}
```
```java
@RunWith(CucumberWithSerenity.class)
@CucumberOptions(features = "classpath:features", tags = "@try2")
public final class RunCucumberIT {

    @ClassRule
    public static PostgreSQLContainer postgres = Containers.POSTGRES;

    @ClassRule
    public static GenericContainer wiremockContainer = Containers.WIREMOCK;

    @ClassRule
    public static GenericContainer s3 = Containers.S3;

    private RunCucumberIT() {
    }
}
```
Thanks. And this test works when running with local Docker?

Note that you did not share your call to `Containers.POSTGRES.getMappedPort(X)` (which value for X?) and that the exception is thrown for port 8080 (which is not exposed by `PostgreSQLContainer`).
> Thanks. And this test works when running with local Docker?

Yes, it runs perfectly on my local Docker.

> Note that you did not share your call to `Containers.POSTGRES.getMappedPort(X)` (which value for X?) and that the exception is thrown for port 8080 (which is not exposed by `PostgreSQLContainer`).

Yes, it's a copy-paste mistake; it's actually for `Containers.WIREMOCK.getMappedPort(8080)`.
```java
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.DEFINED_PORT,
        classes = {ApiServiceApplication.class},
        properties = {
                "logging.level.root=INFO",
        }
)
public class SpringBootTestLoader {

    private static final Logger LOG = LoggerFactory.getLogger(SpringBootTestLoader.class);

    @DynamicPropertySource
    static void registerPgProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", () -> "jdbc:postgresql://localhost:" +
                Containers.POSTGRES.getMappedPort(5432) + "/dm_db");
        registry.add("spring.datasource.username", () -> "XXXX");
        registry.add("spring.datasource.password", () -> "XXXX");
        registry.add("spring.datasource.driver-class-name", () -> "org.postgresql.Driver");
        registry.add("customer.service.base.url", () ->
                "http://" + Containers.WIREMOCK.getContainerIpAddress() + ":" +
                        Containers.WIREMOCK.getFirstMappedPort());
        registry.add("integrations.datasearch.api.url", () ->
                "http://" + Containers.WIREMOCK.getContainerIpAddress() +
                        ":" + Containers.WIREMOCK.getFirstMappedPort() + "/data_search/api/v2");
        registry.add("integrations.datasearch.api.paidKey", () -> "eqweqwerwe");
        registry.add("aws.s3.add.exclude.bucket.name", () -> "Bucket-1");
        registry.add("aws.s3.add.exclude.bucket.endpoint", () -> "http://" +
                Containers.S3.getContainerIpAddress() + ":" + Containers.S3.getMappedPort(9090));
    }

    @Before
    public void loadSpring() {
        LOG.info("-------------- Spring Context Initialized For Executing Cucumber Tests --------------");
    }
}
```
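One detail worth noting in the loader above: the properties are registered as lambdas, so the mapped ports are only resolved when Spring reads the property, i.e. after the containers have started. A minimal illustration of that lazy-vs-eager difference (plain Java, no Docker needed; names are illustrative, not from the project):

```java
import java.util.function.Supplier;

// Sketch (not the project's code): why the loader above registers
// properties as lambdas rather than pre-built strings.
class LazyPropertySketch {
    static int mappedPort = -1; // placeholder, like a container before start()

    public static void main(String[] args) {
        // Eager: the value is captured now, while the port is still unknown
        String eager = "jdbc:postgresql://localhost:" + mappedPort + "/dm_db";
        // Lazy: only the recipe is captured; the port is read when get() runs
        Supplier<String> lazy = () -> "jdbc:postgresql://localhost:" + mappedPort + "/dm_db";

        mappedPort = 49153; // simulate the container starting and a port being mapped
        System.out.println(eager);      // still built from -1
        System.out.println(lazy.get()); // sees 49153
    }
}
```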
Just to clarify, you aren't calling `Containers.WIREMOCK.getMappedPort(8080)` but `Containers.WIREMOCK.getFirstMappedPort()`.

From the last answer, it is not clear to me whether you can run the test successfully with a local Docker. Also, did this issue occur after switching to another Testcontainers version, or did it never work with your Jenkins setup?
> Just to clarify, you aren't calling `Containers.WIREMOCK.getMappedPort(8080)` but `Containers.WIREMOCK.getFirstMappedPort()`.

Thanks for noticing. I just now changed it to `Containers.WIREMOCK.getMappedPort(8080)`, but it still did not work.

> From the last answer, it is not clear to me whether you can run the test successfully with a local Docker. Also, did this issue occur after switching to another Testcontainers version, or did it never work with your Jenkins setup?

Yes, it works on my local machine and the test cases pass, but I did not manage to run it on the Jenkins setup; it always fails there. Also, I am using version 1.15.3, but switching to 1.16.0 also results in the same error on Jenkins.
Exact error when Jenkins runs inside the kube, with the localstack container (4566):

```
java.lang.IllegalArgumentException: Requested port (4566) is not mapped
    at org.testcontainers.containers.ContainerState.getMappedPort(ContainerState.java:153)
    at org.testcontainers.containers.localstack.LocalStackContainer.getEndpointOverride(LocalStackContainer.java:224)
    at org.testcontainers.containers.localstack.LocalStackContainer.getEndpointOverride(LocalStackContainer.java:193)
    at org.testcontainers.containers.localstack.LocalStackContainer.getEndpointConfiguration(LocalStackContainer.java:189)
```
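For context on where this exception comes from: Testcontainers resolves host ports from the port bindings that Docker reports via inspect; when Docker has not assigned a host port for the requested container port, the lookup has nothing to return. A minimal sketch of that failure mode (hypothetical class and field names, not the actual Testcontainers implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not the Testcontainers source): a mapped-port
// lookup over the bindings Docker reports in `docker inspect`.
class PortLookupSketch {
    // containerPort -> hostPort, as read from NetworkSettings.Ports
    private final Map<Integer, Integer> bindings = new HashMap<>();

    int getMappedPort(int containerPort) {
        Integer hostPort = bindings.get(containerPort);
        if (hostPort == null) {
            throw new IllegalArgumentException(
                    "Requested port (" + containerPort + ") is not mapped");
        }
        return hostPort;
    }

    public static void main(String[] args) {
        // An empty bindings map mirrors a `"Ports": {}` section in docker inspect
        PortLookupSketch state = new PortLookupSketch();
        try {
            state.getMappedPort(4566);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Requested port (4566) is not mapped
        }
    }
}
```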
```java
@Shared
public static final LocalStackContainer localstack;

static {
    localstack = new LocalStackContainer(localstackImage)
            .withServices(LocalStackContainer.Service.SSM, LocalStackContainer.Service.S3,
                    LocalStackContainer.Service.SECRETSMANAGER)
            .withStartupTimeout(Duration.ofSeconds(300))
            .withExposedPorts(4566, 4571)
            .withLogConsumer(new Slf4jLogConsumer(logger))
            .waitingFor(Wait.forLogMessage(".*Execution of \"start_api_services\" took.*", 1));
    localstack.start();
}
```
Hi @abhishek-rdm @bkosaraju,
Have you tried Testcontainers 1.16.0?
Hi @bsideup ,
Many thanks for your reply,
Yes, I tried with 1.16.0 there, but I haven't managed to get the tests themselves to run. It complained about "no bridge network found" (EKS with a Jenkins agent running in a pod), so I set TESTCONTAINERS_HOST_OVERRIDE=&lt;default route IP&gt;, which eventually resulted in:

```
org.testcontainers.shaded.org.awaitility.core.ConditionTimeoutException: Lambda expression in org.testcontainers.utility.ResourceReaper$2 that uses com.github.dockerjava.api.DockerClient, com.github.dockerjava.api.DockerClientjava.lang.String: expected the predicate to return <true> but it returned <false> for input of ...... within 5 seconds.
```

Then I switched back to 1.15.3; the app pretty much started and was able to execute, but the spawned container does not have the ports exposed despite the `.withExposedPorts(4566)` option.

On another note, I disabled Ryuk because I couldn't find a way to get the Ryuk port (8080) exposed; otherwise I was getting the "requested port 8080 not exposed" error.

Here I am pasting the docker inspect output, where I can't see the exposed ports.
# docker inspect hardcore_bardeen
[
{
"Id": "ff7e7954a06517d30b498bf34c654f3692d8a36a295304e4b137155dfbe26ddd",
"Created": "2021-08-30T01:15:55.716493061Z",
"Path": "docker-entrypoint.sh",
"Args": [],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 5129,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-08-30T01:15:55.989406101Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c0f30cd5c7412ae318f825d11e8d7f56234da4dd22ded177af460386a8c206b3",
"ResolvConfPath": "/var/lib/docker/containers/ff7e7954a06517d30b498bf34c654f3692d8a36a295304e4b137155dfbe26ddd/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/ff7e7954a06517d30b498bf34c654f3692d8a36a295304e4b137155dfbe26ddd/hostname",
"HostsPath": "/var/lib/docker/containers/ff7e7954a06517d30b498bf34c654f3692d8a36a295304e4b137155dfbe26ddd/hosts",
"LogPath": "/var/lib/docker/containers/ff7e7954a06517d30b498bf34c654f3692d8a36a295304e4b137155dfbe26ddd/ff7e7954a06517d30b498bf34c654f3692d8a36a295304e4b137155dfbe26ddd-json.log",
"Name": "/hardcore_bardeen",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/var/run/docker.sock:/var/run/docker.sock:rw"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-file": "10",
"max-size": "10m"
}
},
"NetworkMode": "default",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": [],
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": true,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [
{
"Name": "memlock",
"Hard": -1,
"Soft": -1
}
],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/6a79a79ea83dd7d95756249de86eec34324336f9ffae8bc1c0952b7b4563b78d-init/diff:/var/lib/docker/overlay2/a23c623760fe4683bbfa6deb4fe54ea56b0bac97d74e1a1cc7c294eb7f86c703/diff",
"MergedDir": "/var/lib/docker/overlay2/6a79a79ea83dd7d95756249de86eec34324336f9ffae8bc1c0952b7b4563b78d/merged",
"UpperDir": "/var/lib/docker/overlay2/6a79a79ea83dd7d95756249de86eec34324336f9ffae8bc1c0952b7b4563b78d/diff",
"WorkDir": "/var/lib/docker/overlay2/6a79a79ea83dd7d95756249de86eec34324336f9ffae8bc1c0952b7b4563b78d/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/var/run/docker.sock",
"Destination": "/var/run/docker.sock",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "ff7e7954a065",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"4566/tcp": {},
"4571/tcp": {},
"8080/tcp": {}
},
[output truncated]
@bkosaraju FYI you're getting the same result; 1.16.0 just fails faster, so consider keeping it. Also, disabling Ryuk makes no sense if it fails with the same error.

Could you please share the full output of `docker inspect`? What you shared is lacking the actual status with the network, and only shows that the ports were requested, not that they were mapped.
Hi @bsideup ,
As suggested this time I ran with 1.16.0
Caused by:
org.testcontainers.shaded.org.awaitility.core.ConditionTimeoutException: Lambda expression in org.testcontainers.utility.ResourceReaper$2 that uses com.github.dockerjava.api.DockerClient, com.github.dockerjava.api.DockerClientjava.lang.String: expected the predicate to return <true> but it returned <false> for input of <InspectContainerResponse(args=[], config=ContainerConfig(attachStderr=false, attachStdin=false, attachStdout=false, cmd=[/app], domainName=, entrypoint=null, env=[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin], exposedPorts=[8080/tcp], hostName=ce97337c1c95, image=testcontainers/ryuk:0.3.1, labels={org.testcontainers=true}, macAddress=null, networkDisabled=true, onBuild=null, stdinOpen=false, portSpecs=null, stdInOnce=false, tty=false, user=, volumes=null, workingDir=, healthCheck=null), created=2021-08-30T08:55:34.056879812Z, driver=overlay2, execDriver=null, hostConfig=HostConfig(binds=[/var/run/docker.sock:/var/run/docker.sock:rw], blkioWeight=0, blkioWeightDevice=null, blkioDeviceReadBps=null, blkioDeviceWriteBps=null, blkioDeviceReadIOps=null, blkioDeviceWriteIOps=null, memorySwappiness=null, nanoCPUs=0, capAdd=null, capDrop=null, containerIDFile=, cpuPeriod=0, cpuRealtimePeriod=0, cpuRealtimeRuntime=0, cpuShares=0, cpuQuota=0, cpusetCpus=, cpusetMems=, devices=null, deviceCgroupRules=null, deviceRequests=null, diskQuota=null, dns=null, dnsOptions=null, dnsSearch=null, extraHosts=null, groupAdd=null, ipcMode=shareable, cgroup=, links=[], logConfig=LogConfig(type=json-file, config={max-file=10, max-size=10m}), lxcConf=null, memory=0, memorySwap=0, memoryReservation=0, kernelMemory=0, networkMode=default, oomKillDisable=false, init=null, autoRemove=true, oomScoreAdj=0, portBindings={8080/tcp=[Lcom.github.dockerjava.api.model.Ports$Binding;@6b386f7b}, privileged=false, publishAllPorts=false, readonlyRootfs=false, restartPolicy=no, ulimits=[Ulimit(name=memlock, soft=-1, hard=-1)], cpuCount=0, cpuPercent=0, ioMaximumIOps=0, 
ioMaximumBandwidth=0, volumesFrom=null, mounts=null, pidMode=, isolation=null, securityOpts=null, storageOpt=null, cgroupParent=, volumeDriver=, shmSize=67108864, pidsLimit=null, runtime=runc, tmpFs=null, utSMode=, usernsMode=, sysctls=null, consoleSize=[0, 0]), hostnamePath=/var/lib/docker/containers/ce97337c1c95387e6725f9ad5f36a7142e703f31c86d3e173f83c683a756c4c5/hostname, hostsPath=/var/lib/docker/containers/ce97337c1c95387e6725f9ad5f36a7142e703f31c86d3e173f83c683a756c4c5/hosts, logPath=/var/lib/docker/containers/ce97337c1c95387e6725f9ad5f36a7142e703f31c86d3e173f83c683a756c4c5/ce97337c1c95387e6725f9ad5f36a7142e703f31c86d3e173f83c683a756c4c5-json.log, id=ce97337c1c95387e6725f9ad5f36a7142e703f31c86d3e173f83c683a756c4c5, sizeRootFs=null, imageId=sha256:ee7515743e6fb92fd9ae2e9b8a4ad4516ed19507079788849c33025b3bfdb7a5, mountLabel=, name=/testcontainers-ryuk-7ceda26f-de52-4615-bdb4-3ef5e0f1c4cb, restartCount=0, networkSettings=NetworkSettings(bridge=, sandboxId=2e3a698d8d7e4be4f08a7ebe67afca3fe9adfe5ea5b12dfc9cb0b3dd2a5f34d2, hairpinMode=false, linkLocalIPv6Address=, linkLocalIPv6PrefixLen=0, ports={}, sandboxKey=/var/run/docker/netns/2e3a698d8d7e, secondaryIPAddresses=null, secondaryIPv6Addresses=null, endpointID=, gateway=, portMapping=null, globalIPv6Address=, globalIPv6PrefixLen=0, ipAddress=, ipPrefixLen=0, ipV6Gateway=, macAddress=, networks={bridge=ContainerNetwork(ipamConfig=null, links=[], aliases=null, networkID=, endpointId=, gateway=, ipAddress=, ipPrefixLen=0, ipV6Gateway=, globalIPv6Address=, globalIPv6PrefixLen=0, macAddress=)}), path=/app, processLabel=, resolvConfPath=/var/lib/docker/containers/ce97337c1c95387e6725f9ad5f36a7142e703f31c86d3e173f83c683a756c4c5/resolv.conf, execIds=null, state=InspectContainerResponse.ContainerState(status=running, running=true, paused=false, restarting=false, oomKilled=false, dead=false, pid=28798, exitCode=0, error=, startedAt=2021-08-30T08:55:34.559430226Z, finishedAt=0001-01-01T00:00:00Z, health=null), volumes=null, 
volumesRW=null, node=null, mounts=[InspectContainerResponse.Mount(name=null, source=/var/run/docker.sock, destination=/var/run/docker.sock, driver=null, mode=rw, rw=true)], graphDriver=GraphDriver(name=overlay2, data=GraphData(rootDir=null, deviceId=null, deviceName=null, deviceSize=null, dir=null)), platform=linux)> within 5 seconds.
Docker inspect output:
# docker inspect ce97337c1c95
[
{
"Id": "ce97337c1c95387e6725f9ad5f36a7142e703f31c86d3e173f83c683a756c4c5",
"Created": "2021-08-30T08:55:34.056879812Z",
"Path": "/app",
"Args": [],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 28798,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-08-30T08:55:34.559430226Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:ee7515743e6fb92fd9ae2e9b8a4ad4516ed19507079788849c33025b3bfdb7a5",
"ResolvConfPath": "/var/lib/docker/containers/ce97337c1c95387e6725f9ad5f36a7142e703f31c86d3e173f83c683a756c4c5/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/ce97337c1c95387e6725f9ad5f36a7142e703f31c86d3e173f83c683a756c4c5/hostname",
"HostsPath": "/var/lib/docker/containers/ce97337c1c95387e6725f9ad5f36a7142e703f31c86d3e173f83c683a756c4c5/hosts",
"LogPath": "/var/lib/docker/containers/ce97337c1c95387e6725f9ad5f36a7142e703f31c86d3e173f83c683a756c4c5/ce97337c1c95387e6725f9ad5f36a7142e703f31c86d3e173f83c683a756c4c5-json.log",
"Name": "/testcontainers-ryuk-7ceda26f-de52-4615-bdb4-3ef5e0f1c4cb",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/var/run/docker.sock:/var/run/docker.sock:rw"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-file": "10",
"max-size": "10m"
}
},
"NetworkMode": "default",
"PortBindings": {
"8080/tcp": [
{
"HostIp": "",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": true,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [
{
"Name": "memlock",
"Hard": -1,
"Soft": -1
}
],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/b9de3753ac65861fa6c28901c9350aa04fe738db05b48b8394b0048a588a086a-init/diff:/var/lib/docker/overlay2/c2520944f95b78a39409504c1a1ff7a077f908ed5a71b078313ea65a9cd7b290/diff:/var/lib/docker/overlay2/181ef0d267cd3c8f5dc6535bfbba98ab0848650db7315dca6e54e30aaf099775/diff:/var/lib/docker/overlay2/9959d3832d6363b449346c2fb5d2380d23e7337b163f98b12dc53b8e936ba5dc/diff",
"MergedDir": "/var/lib/docker/overlay2/b9de3753ac65861fa6c28901c9350aa04fe738db05b48b8394b0048a588a086a/merged",
"UpperDir": "/var/lib/docker/overlay2/b9de3753ac65861fa6c28901c9350aa04fe738db05b48b8394b0048a588a086a/diff",
"WorkDir": "/var/lib/docker/overlay2/b9de3753ac65861fa6c28901c9350aa04fe738db05b48b8394b0048a588a086a/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/var/run/docker.sock",
"Destination": "/var/run/docker.sock",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "ce97337c1c95",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8080/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/app"
],
"Image": "testcontainers/ryuk:0.3.1",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"NetworkDisabled": true,
"OnBuild": null,
"Labels": {
"org.testcontainers": "true"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "2e3a698d8d7e4be4f08a7ebe67afca3fe9adfe5ea5b12dfc9cb0b3dd2a5f34d2",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/2e3a698d8d7e",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "",
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"DriverOpts": null
}
}
}
}
]
@bkosaraju so, it looks like your Docker isn't assigning any host ports:

```
"PortBindings": {
    "8080/tcp": [
        {
            "HostIp": "",
            "HostPort": ""
        }
    ]
},
```
Which sounds like a problem with Docker, not Testcontainers :)
Thanks @bsideup,
Indeed, this pointed me in the right direction for the investigation. I eventually got this working with the following options:
- Set privileged mode and spawn the containers with a custom network, as there is no bridge network available inside the EKS Jenkins agent.
- I did not find any option to set the network for the Ryuk container. I would be happy to have localstack managed by the resource reaper, so please let me know in case there is a way to change the network configuration for the Ryuk container.
- Set the following environment variables:

```
TESTCONTAINERS_RYUK_DISABLED=true
TESTCONTAINERS_HOST_OVERRIDE=172.31.0.1  # <-- to avoid "bridge network not found"
```
```java
private static Network createInternalBridgeNetwork() {
    Consumer<CreateNetworkCmd> cmdModifier = createNetworkCmd -> {
        com.github.dockerjava.api.model.Network.Ipam.Config ipamConfig =
                new com.github.dockerjava.api.model.Network.Ipam.Config()
                        .withSubnet("172.31.0.1/24");
        createNetworkCmd.withDriver("bridge")
                .withIpam(new com.github.dockerjava.api.model.Network.Ipam().withConfig(ipamConfig));
    };
    return Network.builder()
            .createNetworkCmdModifier(cmdModifier)
            .build();
}

public static Network network = createInternalBridgeNetwork();
```
```java
@Shared
public static final LocalStackContainer localstack;

static {
    localstack = new LocalStackContainer(localstackImage)
            .withServices(LocalStackContainer.Service.SSM, LocalStackContainer.Service.S3,
                    LocalStackContainer.Service.SECRETSMANAGER)
            .withStartupTimeout(Duration.ofSeconds(300))
            .withNetwork(network)
            .withExposedPorts(4566)
            .withPrivilegedMode(true)
            .withLogConsumer(new Slf4jLogConsumer(logger))
            .waitingFor(Wait.forLogMessage(".*Execution of \"start_api_services\" took.*", 1));
    localstack.start();
}
```
@bkosaraju see https://www.testcontainers.org/features/configuration/#customizing-ryuk-resource-reaper

Although I would still try to configure the bridge network.
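For reference, the configuration page linked above documents Ryuk-related options that can be set in `~/.testcontainers.properties` or as environment variables; a sketch of the relevant ones (values illustrative):

```properties
# ~/.testcontainers.properties
ryuk.container.privileged=true

# or, as environment variables:
# TESTCONTAINERS_RYUK_CONTAINER_PRIVILEGED=true
# TESTCONTAINERS_RYUK_DISABLED=true
```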
@bkosaraju the solution works for me as well, thanks a lot. @bsideup it would still be great to have an easy interface in Testcontainers for creating a bridge network.
@abhishek-rdm creating a bridge network is a workaround, not a solution. The big question is why the default bridge network is missing.
Perhaps you could try configuring the daemon? https://docs.docker.com/network/bridge/#configure-the-default-bridge-network
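As a sketch of that suggestion, the default bridge can be configured in `/etc/docker/daemon.json` (followed by a daemon restart); the `bip` key sets the bridge's address, and the value here is purely illustrative and would need to match your environment:

```json
{
  "bip": "172.31.0.1/24"
}
```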
Thanks @bsideup,
Sure, configuring the bridge is certainly a clean option, with two caveats:

Docker itself suggests staying away from the default bridge network, and it is not recommended for production use cases:

> The default bridge network is considered a legacy detail of Docker and is not recommended for production use. Configuring it is a manual operation, and it has technical shortcomings.

That being said, if we had an option to specify the network name instead of the hardcoded name "bridge" at https://github.com/testcontainers/testcontainers-java/blob/master/core/src/main/java/org/testcontainers/dockerclient/DockerClientProviderStrategy.java#L278, it would be just a config option similar to TESTCONTAINERS_HOST_OVERRIDE, and Ryuk would be a great help in managing resources.

Though I configured this separately, I feel it's not quite right to take container lifecycle management into my own hands and implement it myself when there is a well-organised Ryuk. All in all, at this moment TESTCONTAINERS_HOST_OVERRIDE and Ryuk are mutually exclusive for me, and the options that can be configured on Ryuk do not cover networks.

On a side note, I tried to override some of the options with a custom EnvironmentAndSystemPropertyClientProviderStrategy, but that didn't seem to get picked up either.
Description: "Requested port (X) is not mapped" when running with Jenkins Kubernetes Docker-in-Docker:

```java
Containers.POSTGRES.getMappedPort(X)
```

Current behavior: `Requested port (X) is not mapped`
Expected behavior: The API should return the assigned port.
Version: 1.15.3
Logs: testcontainer_logs.txt