ehazlett opened this issue 7 years ago (status: Open)
I'm running Interlock 1.4 with docker 17.05.0-ce and nginx 1.11.10. Both Interlock and nginx are running on the swarm manager.
I configure my services using docker compose file version 3 and deploy them with docker stack deploy.
I successfully configured Interlock so that any container started on the manager node gets added to the nginx configuration and I'm able to access the service via the nginx gateway.
However, if the container starts on one of the worker nodes, Interlock does not detect it and does not create any routes for it.
Is this a known issue?
Note that when I run docker -H tcp://192.168.1.100:2375 events I only see events from the manager node, not the worker nodes. Is that the expected behaviour? Thanks!
Any idea when this support would be added?
I have looked at the code for the server.go file
In that file there is a poller function called runPoller. This function uses the Docker SDK ContainerList function. However, ContainerList only returns containers on the local Docker daemon, not the containers across the cluster of swarm nodes.
containers, err := s.client.ContainerList(context.Background(), opts)
So I do not understand how Interlock can add routes for services which are deployed on other nodes. Is this functionality not supported? Do the containers exposed via nginx (the load balancer) need to run on the same host where Interlock is running?
Using the Java Docker SDK I was able to list the swarm services and find the tasks for those services. Could Interlock use the task labels instead of the labels on Docker containers, as is done in the runPoller function? Would such a patch be welcome?
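For illustration, here is a minimal sketch of what that could look like with the Docker Go SDK; the label name and filters are just examples of the idea, not Interlock's current code:

package main

import (
    "context"
    "fmt"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/filters"
    "github.com/docker/docker/client"
)

func main() {
    cli, err := client.NewEnvClient()
    if err != nil {
        panic(err)
    }

    // Only services carrying the interlock.hostname label (illustrative filter).
    f := filters.NewArgs()
    f.Add("label", "interlock.hostname")
    services, err := cli.ServiceList(context.Background(), types.ServiceListOptions{Filters: f})
    if err != nil {
        panic(err)
    }

    for _, svc := range services {
        // Tasks are listed cluster-wide when talking to a manager.
        tf := filters.NewArgs()
        tf.Add("service", svc.ID)
        tasks, err := cli.TaskList(context.Background(), types.TaskListOptions{Filters: tf})
        if err != nil {
            panic(err)
        }
        fmt.Printf("service %s has %d tasks\n", svc.Spec.Name, len(tasks))
    }
}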
Thanks
Thanks for the interest!
Yes, you could use the tasks; however, the user would need to create the service using the --container-label option instead of the service labels (--label). If you wanted to use the service labels, you would need to get the service info and then use that.
Another side note is that if the service is on its own network, you would need to make sure that the proxy container (nginx, etc.) gets attached to the service network, otherwise it won't be able to reach the backend.
I can use either the service labels or the container labels. However, I think I will first try service labels: if present, dig down to the tasks to find their specific IPs, then dig down to the containers to find the ports they expose via their Dockerfile.
I don't see any other option to support swarm services which can be deployed on any node. I'll try doing this via the polling mechanism.
Agreed. Nginx and the services it proxies need to be in the same overlay network.
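For example (names here are only placeholders), one way to wire that up is an attachable overlay network shared by the backend service and the proxy container:

docker network create --driver overlay --attachable appnet
docker service create --name hello --network appnet --label "interlock.hostname=test" --label "interlock.domain=local" alpine:latest ping docker.com
# attach the already-running proxy container (placeholder name) to the same network
docker network connect appnet nginx-proxy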
Cheers Jean-claude
Just to chime in: not 100% sure what you mean about the polling, but AFAIK the manager's events are now swarm-wide, so one can watch those to discover new services/tasks.
Regarding the plan to dig down from services to tasks to containers: be careful, as that will involve a lot of API calls, which will hurt at large scale. FYI, /services and /tasks return summaries that have most of the info needed and are a lot cheaper.
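For example, assuming the daemon listens on the default unix socket (API v1.30 corresponds to Docker 17.06), those summary endpoints can be inspected directly:

curl --unix-socket /var/run/docker.sock http://localhost/v1.30/services
curl --unix-socket /var/run/docker.sock http://localhost/v1.30/tasks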
Yes, those are the endpoints I wanted to use: /services and /tasks.
OK, I was not aware the events now include swarm-wide events. In the version of Docker I have, I don't see them when I run
docker events
Is that the command to see them? I've read a few posts describing various options for the command, but I don't know which one was finally implemented.
Which Docker version are you running?
I'm running
Docker version 17.06.0-ce, build 02c1d87
When I run docker events and then run this command:
docker service create --detach=false --constraint 'node.role == manager' --label "interlock.hostname=test, interlock.domain=local" --replicas 1 --name hello alpine:latest ping docker.com
I get these events
2017-08-16T17:48:30.023419627Z service create fyg40a1coqkfmgs6m94th6ncp (name=hello)
2017-08-16T17:48:32.346189802Z image pull alpine:latest@sha256:1072e499f3f655a032e88542330cf75b02e7bdf673278f701d7ba61629ee3ebe (name=alpine)
2017-08-16T17:48:32.381662707Z container create c069671ff5630f6c9ca328a5cf61d3073d11b3dd32d9215f633f49d70e75c866 ( com.docker.swarm.node.id=lrdb0krdeqxufy3qswi61chw0, com.docker.swarm.service.id=fyg40a1coqkfmgs6m94th6ncp, com.docker.swarm.service.name=hello, com.docker.swarm.task=, com.docker.swarm.task.id=p141f4f4gr233jbpaamne5tb5, com.docker.swarm.task.name=hello.1.p141f4f4gr233jbpaamne5tb5, image=alpine:latest@sha256:1072e499f3f655a032e88542330cf75b02e7bdf673278f701d7ba61629ee3ebe, name=hello.1.p141f4f4gr233jbpaamne5tb5)
2017-08-16T17:48:32.394780897Z network connect 6ade162d08473c372e1833adf803f6ff43e34b4d879bf811532fed8290eb94c1 (container=c069671ff5630f6c9ca328a5cf61d3073d11b3dd32d9215f633f49d70e75c866, name=bridge, type=bridge)
2017-08-16T17:48:32.533265365Z container start c069671ff5630f6c9ca328a5cf61d3073d11b3dd32d9215f633f49d70e75c866 ( com.docker.swarm.node.id=lrdb0krdeqxufy3qswi61chw0, com.docker.swarm.service.id=fyg40a1coqkfmgs6m94th6ncp, com.docker.swarm.service.name=hello, com.docker.swarm.task=, com.docker.swarm.task.id=p141f4f4gr233jbpaamne5tb5, com.docker.swarm.task.name=hello.1.p141f4f4gr233jbpaamne5tb5, image=alpine:latest@sha256:1072e499f3f655a032e88542330cf75b02e7bdf673278f701d7ba61629ee3ebe, name=hello.1.p141f4f4gr233jbpaamne5tb5)
However, when running the service on a worker:
docker service create --detach=false --constraint 'node.role == worker' --label "interlock.hostname=test, interlock.domain=local" --replicas 1 --name hello alpine:latest ping docker.com
I only see the service events, not the container events.
2017-08-16T17:45:36.513901298Z service remove txtsw8cd1jyqcdwb02f5ul4h8 (name=hello)
2017-08-16T17:45:50.251105519Z service create v9e8nihpeocdlcnfpyphow3ln (name=hello)
I believe that's why the current implementation of Interlock does not detect these containers and does not create routes for them unless the containers happen to run on the manager.
I would like to change that to be service based (so monitoring service events might be sufficient to detect changes in the topology) so that nginx can proxy containers (tasks) started across the swarm cluster.
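For what it's worth, a rough sketch of what watching only service-level events could look like with the Go SDK (a guess at the shape of the change, not existing Interlock code):

package main

import (
    "context"
    "fmt"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/filters"
    "github.com/docker/docker/client"
)

func main() {
    cli, err := client.NewEnvClient()
    if err != nil {
        panic(err)
    }

    // Service create/update/remove events are emitted by the manager
    // regardless of which node the tasks land on.
    f := filters.NewArgs()
    f.Add("type", "service")
    msgs, errs := cli.Events(context.Background(), types.EventsOptions{Filters: f})

    for {
        select {
        case m := <-msgs:
            // Any service change would trigger a proxy configuration reload.
            fmt.Println(m.Action, m.Actor.Attributes["name"])
        case err := <-errs:
            panic(err)
        }
    }
}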
jc
Sorry, my bad then, I thought that had been implemented.
I had seen https://github.com/docker/swarmkit/issues/491. Bit unclear when they will support task events.
Hey Donal,
Here's a summary of my investigation.
As discussed, it's possible to detect changes to the service declarations (either by polling or via the events). Once a change is detected, I can iterate the list of services from the /services endpoint and filter the ones having the interlock.hostname label. I was then going to use these service IDs to filter the tasks. However, I realized that tasks do not offer much information aside from the IP and the ID of the container. The problem remains that I can't retrieve container details from a worker node.
Also, if the service does not map a port, the task does not even have IP information.
So instead of using the container IP I'll use swarm's service-name DNS resolution, and instead of retrieving the port number from the container I will rely on the interlock.port label.
Here's what it would look like:

docker service create --detach=false \
  --constraint 'node.role == worker' --label "interlock.hostname=test" \
  --label "interlock.domain=local" \
  --label "interlock.port=4444" \
  --replicas 1 \
  --name hello \
  alpine:latest ping docker.com
Using the information found on the service spec I can generate nginx routes like this
upstream test_local {
    server hello:4444;
}

server {
    location / {
        proxy_pass http://test_local;
    }
}
the hostname "hello" will be resolved by swarm's DNS resolution to the VIP which load balances between the containers of the service.
what do you think? Can you see any road blocks with this solution ?
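One wrinkle worth keeping in mind (an nginx detail, not something Interlock handles today): nginx normally resolves upstream hostnames once at startup, so a variant that re-resolves the service name through Docker's embedded DNS at 127.0.0.11 could look like this:

server {
    listen 80;
    server_name test.local;

    location / {
        resolver 127.0.0.11 valid=10s;    # Docker's embedded DNS on the overlay network
        set $backend http://hello:4444;   # held in a variable so nginx re-resolves it
        proxy_pass $backend;
    }
}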
Thanks Jean-Claude
I think a possibly better way might be to do a DNS lookup on tasks.<service-dns-alias>; that should give the actual underlying IPs of each task. This makes more sense when used with Prometheus, for instance, where you want to be able to poll each individual container.
I suppose you could just get that from the tasks endpoint, but the DNS lookup will probably be less demanding on the Docker daemon and possibly a lot faster.
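For illustration, a tiny sketch of that lookup from a process attached to the same overlay network (the service name hello is just an example):

package main

import (
    "fmt"
    "net"
)

func main() {
    // tasks.<service> resolves to one A record per running task,
    // whereas <service> alone resolves to the single service VIP.
    ips, err := net.LookupIP("tasks.hello")
    if err != nil {
        panic(err)
    }
    for _, ip := range ips {
        fmt.Println(ip.String())
    }
}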
Turns out I do have the IP address in the tasks. What I don't have is the port. If I declare my service like this:

version: '3'
networks:
  mynet:
services:
  app:
    image: ehazlett/docker-demo:latest
    deploy:
      replicas: 3
    networks:
      - mynet
Then I do see port mappings in the service, but those may only apply to the VIP. I would actually rather not specify the ports mapping at all and only rely on the interlock.port label. This works fine because the container does listen on port 8080 on the overlay network at the various container IPs. Since I know the IP from the task and the port from the service label, I'll be able to generate this nginx configuration:

upstream ctxtest.localtest {
    zone ctxtest.localtest_backend 64k;
    server 10.0.0.3:8080;
    server 10.0.0.8:8080;
    server 10.0.0.10:8080;
}
I think it's the best I can do, since I can't seem to get at the container's exposed port (the one EXPOSEd via the Dockerfile in the image).
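A rough sketch of that generation step with the Go SDK (the interlock.port label and the output format follow the convention proposed above; the field names come from the swarm types, but this is not existing Interlock code):

package main

import (
    "context"
    "fmt"
    "net"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/filters"
    "github.com/docker/docker/api/types/swarm"
    "github.com/docker/docker/client"
)

// upstreamServers builds nginx "server" lines for every running task of a service,
// using the task's overlay-network address and the service's interlock.port label.
func upstreamServers(cli *client.Client, svc swarm.Service) ([]string, error) {
    port := svc.Spec.Labels["interlock.port"]

    f := filters.NewArgs()
    f.Add("service", svc.ID)
    f.Add("desired-state", "running")
    tasks, err := cli.TaskList(context.Background(), types.TaskListOptions{Filters: f})
    if err != nil {
        return nil, err
    }

    var servers []string
    for _, t := range tasks {
        // Task addresses come back in CIDR form, e.g. 10.0.0.3/24.
        for _, att := range t.NetworksAttachments {
            for _, addr := range att.Addresses {
                ip, _, err := net.ParseCIDR(addr)
                if err != nil {
                    continue
                }
                servers = append(servers, fmt.Sprintf("server %s:%s;", ip, port))
            }
        }
    }
    return servers, nil
}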
I'm working on adding swarm services support to Interlock. However, I'm running into a small issue. When I use the Go SDK to talk to the Docker REST API to list services, I get a list of services back, however the ContainerSpec is missing. I tried changing the version of the REST API from 1.26 to 1.30 but that did not help. The struct for TaskSpec documented at https://godoc.org/github.com/moby/moby/api/types/swarm#TaskSpec is:
type TaskSpec struct {
    // ContainerSpec and PluginSpec are mutually exclusive.
    // PluginSpec will only be used when the Runtime field is set to plugin.
    ContainerSpec ContainerSpec       `json:",omitempty"`
    PluginSpec    *runtime.PluginSpec `json:",omitempty"`
    // ... (remaining fields omitted)
}
I've noticed that in the Interlock code base the struct I have does not have a pointer (the star) in front of ContainerSpec.
When I use the REST API via a curl call:
curl --cacert $DOCKER_CERT_PATH/ca.pem --cert $DOCKER_CERT_PATH/cert.pfx --pass supersecret https://192.168.99.105:2376/services
I do see the ContainerSpec. I'm not sure where the problem is. Could it be the version of the Go SDK Interlock is using?
Here's the SDK call I make
serviceoptFilters := filters.NewArgs()
serviceopts := types.ServiceListOptions{
    Filters: serviceoptFilters,
}
log().Debug("getting service list")
services, err := client.ServiceList(context.Background(), serviceopts)
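(For comparison, here is a minimal variant of the same call with the API version pinned explicitly; 1.30 matches 17.06, and this assumes the 2017-era SDK where ContainerSpec is a plain value rather than a pointer. If the labels still come back empty here, the vendored types package in Interlock is the more likely culprit.)

package main

import (
    "context"
    "fmt"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
)

func main() {
    // Pin the API version instead of relying on whatever the vendored SDK defaults to.
    cli, err := client.NewClient("unix:///var/run/docker.sock", "1.30", nil, nil)
    if err != nil {
        panic(err)
    }

    services, err := cli.ServiceList(context.Background(), types.ServiceListOptions{})
    if err != nil {
        panic(err)
    }
    for _, s := range services {
        // With current types, the container spec labels should be populated here.
        fmt.Println(s.Spec.Name, s.Spec.TaskTemplate.ContainerSpec.Labels)
    }
}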
Thanks for your help
Jean-Claude
I have a working proof of concept which uses docker swarm services/tasks to configure a load balancer like nginx/haproxy. I will submit a pull request shortly.
However, I've noticed there's another pull request by @ehazlett, #186, which also seems to be related to creating routes based on information from swarm services and tasks. I'm a little confused. Is that pull request still valid? It seems old.
Can you shed some light? Thanks Jean-Claude
Yes, there has been quite a bit of work towards Swarm support in that branch, but you are correct that it has stalled. I have been trying to work on it but there are a few other things going on. Feel free to open a PR -- I would rather review early than have you write a bunch and not get it merged :)
Hey Evan,
I've looked at the swarm-services branch and I see how the containers and services are passed down to the load balancers' generate.go. I think that's a bit more elegant than what I have, which is two separate functions on the load balancers to handle either containers or services.
I too had realized that the functions that resolve the values of the labels should take a label array instead of a container.
I'm thinking I should integrate my changes into the swarm-services branch. Do you think that's a good idea, or should I base them off master (as I have it now)?
Hey it might be easier to discuss via chat as there are quite a few changes in that branch. Do you have a preferred chat and would you be up for a quick sync? I'm on the Docker Community Slack if you are there (ehazlett)
I noticed my implementation had an issue: it cannot detect when containers are stopped on other nodes. This is currently a limitation of the Docker events; from the node you are listening on, say the manager node, you only receive container events from that manager, not from the other worker nodes.
To work around this issue I'm using the poller and doing a diff of the task states, which I do have access to from the manager node. This way I know when a container on a worker node is down.
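A condensed sketch of that diff (illustrative only; the function names are not from the actual patch):

package main

import (
    "context"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/swarm"
    "github.com/docker/docker/client"
)

// taskStates snapshots task ID -> state for the whole cluster, as seen from a manager.
func taskStates(cli *client.Client) (map[string]swarm.TaskState, error) {
    tasks, err := cli.TaskList(context.Background(), types.TaskListOptions{})
    if err != nil {
        return nil, err
    }
    states := make(map[string]swarm.TaskState, len(tasks))
    for _, t := range tasks {
        states[t.ID] = t.Status.State
    }
    return states, nil
}

// changed reports whether anything differs between two polls; a difference
// would trigger a regeneration of the proxy configuration.
func changed(prev, cur map[string]swarm.TaskState) bool {
    if len(prev) != len(cur) {
        return true
    }
    for id, state := range cur {
        if prev[id] != state {
            return true
        }
    }
    return false
}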
This all works well. I'm now testing with two separate stacks deployed to the same swarm. Ideally, I'd like to be able to deploy multiple stacks representing various staging branches from our build host, so these stacks should work independently.
My question to you is: would it be a good idea to use the stack membership to detect task changes and to generate the nginx configuration?
I know in a previous comment you said to make sure container IPs are only added to the nginx configuration if the given container and nginx are part of the same network.
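On the stack question: docker stack deploy labels each service it creates with com.docker.stack.namespace (worth double-checking on your Docker version), so stack membership can be read straight from the service labels, for example:

docker service ls --filter label=com.docker.stack.namespace=mystack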
This will add support for Swarm services.