gliderlabs / registrator

Service registry bridge for Docker with pluggable adapters
http://gliderlabs.com/registrator
MIT License

Docker instances without ports #38

Open cultureulterior opened 10 years ago

cultureulterior commented 10 years ago

I believe that being able to register docker instances without exported ports (suitably tagged) should be one of the things that registrator supports.

Use cases for such containers would be portless Docker instances that merely move data around (web spiders, webhook senders, queue pullers and pushers of all types). You'd want to know that they exist even though they don't have exposed external ports.

This would allow for monitoring and configuration management, etc.

It should also be noted that Consul (at least) supports portless services.
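
For example (my reading of the Consul agent API; the service name and address are made up for illustration), Consul will accept a registration with an address and no port at all:

$ curl -X PUT http://localhost:8500/v1/agent/service/register \
    -d '{"Name": "spider-worker", "Address": "172.17.0.5"}'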

sheldonh commented 10 years ago

I don't understand the value of service discovery for things that aren't services. Could you clarify a use case?

As an aside, pull request #18 allows you to register services that don't publish ports, although it does more than that, and you may not want the whole deal.

cultureulterior commented 10 years ago

You want to do discovery for monitoring and other operational purposes. Say all Docker containers, even the spiders, should be monitored: you want to pull the canonical service information (the list of containers) from somewhere.

Or say you want to enable nsenter access to all Docker containers; I'm working on something like that here: https://github.com/firstbanco/nansen
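
(Roughly this kind of thing, sketched here with a made-up container name; this is not taken from that project:)

$ PID=$(docker inspect --format '{{.State.Pid}}' spider-worker)
$ sudo nsenter --target "$PID" --mount --uts --ipc --net --pid /bin/sh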

At any rate, there likely will be a pull request with this feature in a few days, because I need it.

progrium commented 10 years ago

How would you properly health check a process without an exposed service? You can already hit Docker on all hosts to see all containers; that sounds more like what you want.

Supporting non-exposed services really messes with the semantics here. When using the etcd KV registry, what is it even going to register?

grove commented 10 years ago

Idea: use "docker exec" to run health checks for containers without exposed services.
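
One hedged sketch of that idea (the container name and probe are made up): wrap docker exec in a check script that a registry's health-check mechanism could invoke, so the probe runs inside the container instead of against an exposed port:

#!/bin/sh
# Hypothetical check script: exit 0 if the portless worker looks healthy, non-zero otherwise.
docker exec spider-worker test -f /tmp/heartbeat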

etopian commented 9 years ago

I have a service which is available to other services on an internal network: for instance, a MySQL instance that is available to other containers and is referenced by a domain name via SkyDNS, like mysql.skydns.local. I don't necessarily want to publish the port on the server and expose it to the world; it's fine for this service to be available only on the local machine. For this I need to be able to publish the IP via DNS so that other services on the local machine can use the service.

sheldonh commented 9 years ago

For local-only access, you don't need SkyDNS. You can just use container links.
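
For example, with legacy links (names made up for illustration; this is my recollection of how linking behaves), the consuming container gets the MySQL address injected directly:

$ docker run -d --name mysql mysql:5.6
$ docker run -d --link mysql:db myapp
# myapp now gets a "db" entry in its hosts file and DB_PORT_3306_* environment variables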

For remote access, either you expose the port or you don't; DNS isn't a good fit for restricting world access. Rather use a firewall for that.

If this advice is missing the point, could you clarify the steps you'd like to take, and what results you'd like to see?

etopian commented 9 years ago

Just because the containers are linked does not mean they don't still need DNS. Linking only works in one direction: it publishes the hostname for the service to only one of the containers, via the hosts file, and it does not publish the ports or any other useful information. In terms of dynamic configuration it is much nicer to have internal DNS that resolves a service IP; that way you can keep adding new containers without having to deal with hosts files. It would also be nice if there were a way to configure what gets published on the start event, for instance via a TXT record, so that this information could be queried later by a container that needs it.
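
For reference, SkyDNS2 records are just JSON values in etcd, so publishing an IP plus an arbitrary TXT payload would look roughly like this (the key layout and field names are my recollection of the SkyDNS2 conventions; the values are made up):

$ etcdctl set /skydns/local/skydns/mysql '{"host":"172.17.0.2","text":"role=primary"}'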

sheldonh commented 9 years ago

Okay. So you expose the port and publish the service via DNS. Now introduce a firewall to limit access to the published service, and you have what you want.

I'm not sure how this relates to the original issue of wanting to register containers that don't offer any services?

etopian commented 9 years ago

Well there are a few problems:

  1. SkyDNS is not supported via the -internal flag.
  2. Each backend listens on the same port, so the host port mapping would be dynamic, which would throw off the proxy server's configuration.
  3. It is totally unnecessary to publish it to the host.

I am just going to write a simple Python daemon that does the same thing this does, but in a more customizable way, so that you can use the start event to publish whatever you want about the container via SkyDNS using TXT, SRV, or IP.

This thing seems a bit too rigid to let me easily do what I want to do.

sheldonh commented 9 years ago

SkyDNS2 should be supported by the -internal flag; the SkyDNS2 support was developed with the -internal flag in mind. Could you illustrate steps to demonstrate how it isn't working?

etopian commented 9 years ago

Your documentation says otherwise; I was going by that. But you are right, -internal does work fine for SkyDNS2, so you should update your documentation. But yes, I would support the idea of being able to publish just an IP without having to publish a port. I could work around things with that for my purposes, since I know the port beforehand, and being able to just get the IP is enough for the proxy server to refer to the backend server.

etopian commented 9 years ago

Also, as far as protecting via a firewall, now that I come to think of it this is not possible either, because all the backend servers need to listen on the same port. When you start them without specifying a port on the host, ports are assigned dynamically, so it's difficult to know how to configure your firewall to properly block them. There is no knowing how many of these backends you are actually going to end up starting.

sheldonh commented 9 years ago

Ah, a doc bug. Could you refer me to the documentation that says the skydns2 backend doesn't work with -internal? I'd certainly like to fix that.

As for publishing IPs without ports, I can't see the value yet, so I'm not motivated to contribute support for it.

As for firewalling, I assumed your clients would be inside the firewall, sorry. Access control for ephemeral ports on a single server IP address is a hard problem to solve. Most of the people I've seen solving it have been using VPNs.

etopian commented 9 years ago

"argument -internal is passed, registrator will register the docker0 internal ip and port instead of the host mapped ones. (etcd and consul only for now)":

On: https://github.com/gliderlabs/registrator/

etopian commented 9 years ago

My proposal for dealing with this issue is this; I may implement it for you with a bit of guidance. If a service does not expose a port on the host but still has one or more ports, then publish the service via DNS using the published port on the container, but only if the -internal flag is set. Is this acceptable? Can this be merged into head?

progrium commented 9 years ago

Expose and publish are two different things; I think you mean publish. Right now, when you use -internal, it can create services for exposed ports. Is that what you want?

etopian commented 9 years ago

Currently if a port is not exposed, registrator ignores it in the bridge giving a message like "ignored 9497af912b87 port 9800 not published on host". I want to change it so that if a service exports no ports, but still publishes one or more ports internally, then it no longer gets ignored and is published via DNS.

progrium commented 9 years ago

We need to use more specific terminology because otherwise it is confusing. "Publishes one or more ports internally" by that you mean exposing ports? In other words, using EXPOSE in Dockerfile? Also, just to be clear, Registrator does not publish to DNS, it hands off to a registry which may publish to DNS.

etopian commented 9 years ago

Sorry.

The ports are exposed via EXPOSE in the Dockerfile and then published via the -p flag on docker run.

And Registrator does not publish to DNS; it hands off to SkyDNS2, which is what I want to do.

So if they are not published, Registrator currently ignores them.

I want it so that if the -internal flag is set, and no ports are published via the -p flag but they are still exposed via EXPOSE, then Registrator no longer ignores them, and instead uses the exposed port and sends that off to the registry.
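
For example (the image name is hypothetical; this is just to pin down the cases):

# In the Dockerfile: the port is exposed, nothing is mapped to the host
EXPOSE 9800

# Published with -p: a host mapping exists, registrator registers it today
$ docker run -d -p 9800:9800 my-backend

# Exposed only (no -p): no host mapping; this is the case I'd like
# registrator -internal to pick up
$ docker run -d my-backend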

progrium commented 9 years ago

Right, if you expose with EXPOSE and use -internal, I'm pretty sure it will make services for them. If I'm wrong, then it will take more thought to figure out the proper semantics; the implementation itself would be easy.

etopian commented 9 years ago

No it does not do this. Currently if no ports are published then you get an error message that says something like:

ignored 9497af912b87 port 9222 not published on host

progrium commented 9 years ago

Which version?

etopian commented 9 years ago

I am running the Docker image from last night... the latest one.

progrium commented 9 years ago

Can you give me the full image name and tag you are using?

etopian commented 9 years ago

https://github.com/gliderlabs/registrator/blob/master/bridge/bridge.go line 159 seems to be where it prints an error to the log and tells the loop to continue to the next item.

progrium commented 9 years ago

And a few lines above it you see that's in an if statement where it only happens if running with -internal ... so either you are not using -internal correctly or using a different version. It would help if you told me the version.

etopian commented 9 years ago

I just did docker pull gliderlabs/registrator:latest, so that's what it is. Not sure how to get you more information about which version I am running. I ran that command again just now and it says it's up to date.

etopian commented 9 years ago

Hold on, perhaps you are right. Restarting the container.

progrium commented 9 years ago

It tells you the version when you start it. But also keep in mind that :latest is the latest release, not what's in master; even though we made a release 22 hours ago, there have been lots of changes in master since.

Some people add -internal after the registry URL in the arguments, but it needs to go first. The next thing I'd ask, if you're still seeing this problem, is that you show me how you are running the container.

In the future it might be better to jump into IRC on Freenode #gliderlabs to troubleshoot.
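
For the record, a working invocation looks something like this (the etcd address and domain are placeholders; check the README for your backend's URI format), with -internal before the registry URI:

$ docker run -d --name registrator \
    -v /var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest -internal skydns2://etcd.local:4001/skydns.local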

etopian commented 9 years ago

Okay, it works. I was running v5, and my problem was that the -internal argument was not first.

Thank you very much for letting me know that; I could not have figured that out on my own.

progrium commented 9 years ago

Glad you figured it out. Now let's keep this thread focused on the topic, which is creating services for containers that do not publish or expose any ports. I want to keep talking about it since I've not decided.

pikeas commented 9 years ago

+1 for port-less services. A service is anything your application expects to be running, whether or not you need to communicate with it directly.

starkovv commented 9 years ago

+1

mgood commented 9 years ago

@starkovv can you provide some details on how you would like to use this information? I think part of the hold-up is that we don't have a clear understanding of when and how people would use the service registry with processes that don't expose a networked service. Providing more concrete examples would help determine how best to handle those situations.

starkovv commented 9 years ago

@mgood this is not exactly what you asked, but it gives some potentially useful insights.

Here is my install: Host http://cl.ly/image/0m0y0V3d1w0O; Cluster (L2) http://cl.ly/image/3H3G2G0O1B36

I think that Docker's model of using a single IP for all containers on a given host, and indirectly mapping NATed host ports to different containers, is NOT the best architectural choice.

In my opinion, containers should be treated like VMs, in the sense that all of a container's ports should be accessible.

For instance, say you have a VoIP PBX running inside a container. Typically the following UDP ports need to be accessible from outside: 5060, 4000-6000, 20000-40000. It becomes a nightmare if you need to deploy a cluster of such systems on a single host.
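
To make the pain concrete (the image name is hypothetical), publishing those ranges explicitly looks something like this, and each mapped port costs a NAT rule and, at least with the default userland proxy, its own proxy process:

$ docker run -d --name pbx1 \
    -p 5060:5060/udp \
    -p 4000-6000:4000-6000/udp \
    -p 20000-40000:20000-40000/udp \
    mypbx-image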

starkovv commented 9 years ago

@mgood I think I managed to summarize what I would like to get as a result:

I'd like all running containers to be registered in Consul, and for each of them I'd like to see the IP addresses of all interfaces except localhost (e.g. eth0, eth1, eth2, etc.).

Optionally it would be nice to see exposed ports (if any).

cultureulterior commented 9 years ago

https://github.com/gliderlabs/registrator/pull/186

vitalyisaev2 commented 9 years ago

+1 for this feature

I need to register containers that interact inside the host's virtual network.

yeasy commented 9 years ago

+1, at least we should provide the option to enable/disable it.

progrium commented 9 years ago

So it's not really about "portless services"; it's about just registering the container's internal IP. That helps.

manics commented 9 years ago

I ran into this a few months ago and made a workaround: https://github.com/manics/registrator/commit/2c310f2814ec22cf7ce6f77e2dca1696d60fbefa (my first ever Go commit).

Use case:

We're experimenting with using Docker as an alternative to spinning up VMs for developers, which means we're pretty much ignoring most Docker best practices. The Docker bridge is set up on a LAN-accessible IP range:

$ cat /etc/sysconfig/docker-network
DOCKER_NETWORK_OPTIONS="-b br0 --fixed-cidr 10.0.0.64/27"
$ cat /etc/sysconfig/network-scripts/ifcfg-br0
...
DEVICE=br0
IPADDR=10.0.0.200
NETMASK=255.255.255.0
TYPE=bridge
GATEWAY=10.0.0.254
...

The idea is that if someone wants a machine for testing a custom/weird/new server config, playing with new dependencies, installing new applications, etc., then instead of starting up a full-blown VM they can play in a temporary Docker container with admin access. It's also useful for testing other Docker images. I've set up registrator/etcd/skydns2 so that it's easy to give the containers a DNS name.

SkyDNS2 is the backend, and it's set up as an authoritative server for an internal subdomain (our main DNS servers delegate to it).
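
To make that delegation concrete (all names and addresses below are placeholders, not our real config), the parent zone only needs an NS record pointing the subdomain at the SkyDNS2 host:

; in the parent zone file
docker.example.com.   IN NS  skydns.example.com.
skydns.example.com.   IN A   10.0.0.200

$ dig +short @10.0.0.200 somecontainer.docker.example.com
10.0.0.65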

Hopefully this is useful info!

sheldonh commented 9 years ago

Could you make it interact properly with -internal, registering ExposedIP when b.config.Internal and HostIP otherwise?

starkovv commented 9 years ago

There is also sometimes a requirement to have 2 or more interfaces connected to different networks.

manics commented 9 years ago

@sheldonh Can do, not straight-away though as I'm busy with other work at the moment.

sheldonh commented 9 years ago

@manics Given how long the issue has existed, I don't think anyone can complain. :-)

cultureulterior commented 9 years ago

@progrium for us it actually is about portless services. Existence vs. nonexistence.

progrium commented 9 years ago

So you want to register the IP of the container regardless, not any ports. Correct?

starkovv commented 9 years ago

@progrium That is exactly what's needed!

cultureulterior commented 9 years ago

@progrium: yes

SpComb commented 9 years ago

I've experimented with something very similar to this with -internal addressing and skydns, using a modified registrator to register skydns2 A/PTR records for containers.

I think the term "portless" is a bit misleading here from a design perspective, and specifically I'm not convinced the design in PR #186 really solves the problem, since it handles some Docker containers as a special "portless" service and others as a set of services.

Instead, what I ended up doing was modifying the Bridge and RegistryAdapter to handle both "hosts" and "services" separately. Each docker container would be registered as a Host (registering A and PTR records in skydns2, ignored by other backends), and then each docker port would be registered as a Service (registering SRV records in skydns2 referencing the A record).

I think this kind of host-vs-service separation would also resolve this issue? You would end up registering a host without any services. Normal behavior would be to register a host plus a number of services.

Unfortunately this was a while ago against an old master, and I don't have any PR for this. But some food for design-thought. I dunno exactly how this should play out when using non-internal addressing, though.
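
To sketch what that host-vs-service split might look like in the SkyDNS2 backend (the key layout and field names are assumptions based on SkyDNS2 conventions, not taken from the patch described above):

# Host: one A/PTR-style record per container
$ etcdctl set /skydns/local/skydns/myhost '{"host":"172.17.0.5"}'

# Service: an SRV-style record per exposed port, pointing at the host name
$ etcdctl set /skydns/local/skydns/web/myhost '{"host":"myhost.skydns.local","port":8080}'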