@arun-gupta the docker ps output doesn't show ports published by swarm.
ping @nathanleclaire
@arun-gupta No ports, even Swarm ports like 8080 here, will be exposed to the outside world (on the public IP address) by default. They should be available inside the cluster, so if you curl <privateIP>:8080 for any of the instances you should get results back.
To expose the port to the outside world via ELB you can add a label to the service IIRC. I forget the exact syntax -- @chungers can you chime in? EDIT: I think it's done on ELB automatically
Arun, try accessing 8080 on the ELB's DNS name?
@nathanleclaire ELB's DNS name:8080 worked.
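For the record, the check that worked amounts to something like the following; the ELB hostname below is a placeholder for the DNS name shown in the AWS console:
$ curl -I http://<elb-dns-name>:8080/    # HEAD request; WildFly should answer via whichever node the ELB and routing mesh pick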
I thought of trying the <private-ip>:8080, but curl is not installed by default; even apt-get is not installed on the AMIs used by Docker for AWS. Any idea on how to get around that?
So is the idea that <private-ip>:8080 will work on all nodes? And ELB's DNS name:8080 will route the request across all the nodes? Is it round robin, or does the ELB know where the containers are running and redirect the request only to those hosts? Trying to understand how many hops happen before the request is actually dispatched to WildFly.
> I thought of trying the <private-ip>:8080 but curl is not installed by default, even apt-get is not installed on the AMIs used by Docker for AWS. Any idea on how to get around that?
You can try in a container, with --net=host
There is no --net=host option when creating a service
> There is no --net=host option when creating a service
I understand, but I meant: spin up a container (not a service) on a node for investigating (instead of installing curl on the host)
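A minimal sketch of that idea, borrowing the nathanleclaire/curl image used later in this thread; the private IP is a placeholder, and since this is a one-off docker run rather than a service, --net=host is available:
$ docker run --rm --net=host nathanleclaire/curl sh -c 'curl -s <private-ip>:8080'    # one-off debug container sharing the host's network namespace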
@thaJeztah I'd like to stick with the organic service :)
Is there a way to find out which node is serving the request? How many hops were done before the request was served?
I think @mrjana would be able to give the nitty-gritty details :smile:
I think curl <private-ip>:8080 inside a service task should still resolve fine. You may be inside of a container's network namespace, but I don't think it affects reachability of the private IP addresses within the VPC.
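One way to check that without installing anything on the host, assuming the task's container image has curl available (the container ID and IP below are placeholders):
$ docker exec <task-container-id> curl -s <private-ip>:8080    # run curl from inside an existing task's network namespace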
At any rate, if you need communication between services in the cluster you should just have them on the same docker network -d overlay and resolve their virtual IP via Docker DNS (there will be a DNS entry for the service name, and load balancing among the tasks behind this entry should happen automatically). Overlays should work fine with Swarm mode services by default in 1.12 IIRC. --publish is for "publishing" to the outside world. It should be a very rare case indeed that you would want to hit <vpc-subnet-private-ip>:<port> directly.
e.g.:
$ docker network create -d overlay swarmnet
7tyzqkpvs9slljwvac7p6qeyu
$ docker service create \
--name server \
--network swarmnet \
nginx
59h8ks8mdwtzz6092slmdq16p
$ docker service create \
--name curler \
--network swarmnet \
nathanleclaire/curl \
sh -c 'while true; do curl server; sleep 1; done'
756fnu2txv9et3zg8jf03sqnl
$ docker service ls
ID NAME REPLICAS IMAGE COMMAND
59h8ks8mdwtz server 1/1 nginx
756fnu2txv9e curler 1/1 nathanleclaire/curl sh -c while true; do curl server; sleep 1; done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
40c0532d59d3 nathanleclaire/curl:latest "sh -c 'while true; d" 11 seconds ago Up 10 seconds curler.1.2m1d0p1cftjm9iqh55elbwas7
8d8ed322cc81 nginx:latest "nginx -g 'daemon off" 53 seconds ago Up 52 seconds 80/tcp, 443/tcp server.1.eeqgtnjp45o2nplegn0sd23gv
$ docker logs 40c0532d59d3
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
> So is the idea that <private-ip>:8080 will work on all nodes? And ELB's DNS name:8080 will route the request across all the nodes? Is it round robin or does the ELB know where the containers are running and redirects the request only to those hosts? Trying to understand how many hops before the request is actually dispatched to WildFly.
Yes, 8080, which is the "swarm port" in this example, will be listening on all nodes. When a request lands on any of those swarm ports it will be forwarded to the service's virtual IP and load balanced across the service's tasks using IPVS. ELB load balances all incoming requests for the service swarm port across that swarm port on all nodes in the cluster IIRC (it seems a bit gratuitous, but it provides options leading in at least two promising directions: SSL termination and node-level health checks).
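A minimal sketch of that routing-mesh behavior, using the WildFly service from this issue; the node IP is a placeholder, and any node in the swarm should do:
$ docker service create --replicas 3 --name web -p 8080:8080 jboss/wildfly
$ curl -I http://<any-node-ip>:8080/    # the published "swarm" port listens on every node and forwards to a task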
@arun-gupta is this still an issue?
@justincormack Partly!
Here is what I did ...
Created an 8-node cluster (3 managers, 5 workers) using Docker for AWS. Created a tunnel from localhost to the cluster on AWS. docker info shows:
Containers: 4
Running: 3
Paused: 0
Stopped: 1
Images: 4
Server Version: 1.12.2-rc1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 38
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge null host overlay
Swarm: active
NodeID: 1v0t1idguxg8gu8qa0fu66dou
Is Manager: true
ClusterID: d69umjtyv3zn09erk3fjc15ar
Managers: 3
Nodes: 8
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 192.168.34.149
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.22-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.854 GiB
Name: ip-192-168-34-149.us-west-1.compute.internal
ID: OMG7:VF2G:2UUO:UFP2:CSSZ:F2U5:KHJQ:ERX3:N6C3:JBCE:R4QJ:46CY
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 48
Goroutines: 123
System Time: 2016-10-13T12:51:08.790213022Z
EventsListeners: 0
Username: arungupta
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
docker ps shows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b1d207ac278 docker4x/l4controller-aws:aws-v1.12.2-rc1-beta6 "loadbalancer run --l" 13 minutes ago Up 13 minutes editions_controller
5977d36fe3df docker4x/shell-aws:aws-v1.12.2-rc1-beta6 "/entry.sh /usr/sbin/" 14 minutes ago Up 14 minutes 0.0.0.0:22->22/tcp jovial_nobel
b7dc9458b319 docker4x/guide-aws:aws-v1.12.2-rc1-beta6 "/entry.sh" 14 minutes ago Up 14 minutes determined_nobel
Created a service using docker service create --replicas 3 --name web -p 80:80 nginx. Now docker ps shows the same set of containers. This output is different from earlier versions, where the containers in each service, and the ports they exposed, were shown.
DefaultDNS:80 shows the default NGINX page. <public-ip-worker>:80 shows the default NGINX page. <public-ip-manager>:80 times out. Are the managers configured to be manager-only?
@arun-gupta docker ps is not a swarm-wide command, and at no time did it show containers other than the ones on the manager node. docker service ps web will show containers running for that service across the swarm.
Currently, the swarm is configured to run containers on both managers and workers. If you created enough replicas, eventually one would be scheduled on the manager you're on too.
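For reference, a quick sketch of checking task placement and scaling with the standard CLI (the service name web comes from above; the output will vary by cluster):
$ docker service ps web        # lists each task of the service and the node it was scheduled on
$ docker service scale web=8   # with more replicas, some tasks should eventually land on the managers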
It's very odd that the nginx sample page is available on the public ip of the worker. Which worker is that? The manager you're on? I'm going to try and reproduce that.
@friism The IP addresses of all the workers were obtained from the EC2 console and then tested in a browser using <ip>:80
@arun-gupta Thanks, port 80 being open is a random holdover from previous versions - we already have an issue tracking closing it.
Let me close this ticket for now, as it looks like it went stale.
Created a Swarm cluster using Docker for AWS - 3 managers + 5 workers. Deployed a service as:
docker service create --replicas 3 --name web -p 8080:8080 jboss/wildfly
Expected <public-ip>:8080 to show the WildFly landing page, but it times out. docker ps on the master shows that port 8080 is not exposed.
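As a hedged sketch, the published port can also be verified from the swarm itself (the service name web is taken from the report above):
$ docker service inspect --pretty web    # the Ports section should list the published 8080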