yyyar / gobetween

:cloud: Modern & minimalistic load balancer for the Cloud era
http://gobetween.io

Support multiple discoveries per server #112

Open jtopjian opened 7 years ago

jtopjian commented 7 years ago

Hi all,

Now that 0.5.0 has been released with LXD support, I wanted to revisit adding the ability to support multiple LXD plugins per server entry in gobetween. I created an example of this in https://github.com/yyyar/gobetween/pull/90 but closed it because I felt the design needed further discussion.

This is not specific to LXD, though. The reason I want to discuss this first is that the design will probably set a precedent for any other plugin that wants to do the same.

Problem Description

Right now, when configuring a discovery method in a server, gobetween only supports a single discovery:

  # (each block below is a separate, alternative configuration)

  [servers.example.discovery]
  kind = "docker"
  docker_endpoint = "http://localhost:2375"

  [servers.example.discovery]
  kind = "json"
  json_endpoint = "http://localhost:8080"

  [servers.example.discovery]
  kind = "plaintext"
  plaintext_endpoint = "http://some.url.com"

  [servers.example.discovery]
  kind = "consul"
  consul_host = "localhost:8500"

  [servers.example.discovery]
  kind = "lxd"
  lxd_server_address = "https://lxd-01.example.com:8443"

For most of the plugins/drivers, a single discovery is acceptable. For example, plaintext, json, and consul can all return backends that reside across multiple pieces of infrastructure (servers, VMs, containers, etc.):

+---+    +---+     +---+
| G +--> | D +---> | B |
+---+    +-+-+     +---+
           |
           |       +---+
           +-----> | B |
           |       +---+
           |
           |       +---+
           +-----> | B |
                   +---+

But other discovery methods (LXD, Docker (#5), and API) can only retrieve information from a single piece of infrastructure:

+---+    +-------+
| G +--> | D     |
+---+    |       |
         |    B  |
         |       |
         |    B  |
         |       |
         |    B  |
         +-------+

This effectively puts gobetween in a one-to-one relationship, which can be very limiting.

Use Case

By enabling gobetween to query multiple discoveries, the one-to-one relationship becomes one-to-many:

+---+    +-------+
| G +--> | D     |
+-+-+    |       |
  |      |    B  |
  |      |       |
  |      |    B  |
  |      |       |
  |      |    B  |
  |      +-------+
  |
  |      +-------+
  +----> | D     |
         |       |
         |    B  |
         |       |
         |    B  |
         +-------+

Depending on how this is implemented, not only would gobetween be able to query multiple endpoints of a specific discovery, but it could also query multiple different discoveries.

Possible Solutions

  1. Use json, plaintext, consul, dns, etc. instead.

Using LXD as an example: when someone wants gobetween to talk to multiple LXD servers, they would stop using the LXD discovery and use json, plaintext, etc. instead.

This is certainly a possible solution, but in my opinion, it's kind of weak.
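For reference, a json discovery already lets one endpoint aggregate backends from any number of LXD servers; a sketch (the registry URL and the returned fields are illustrative, not from this thread):

```toml
[servers.example.discovery]
kind = "json"
# hypothetical aggregation endpoint that knows about every LXD server
json_endpoint = "http://registry.example.com/backends"

# The endpoint would return something like:
# [
#   {"host": "lxd-01.example.com", "port": "80"},
#   {"host": "lxd-02.example.com", "port": "80"}
# ]
```

The drawback, of course, is that someone has to build and operate that aggregation service.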

  2. Make DiscoveryConfig a slice/array.

By doing this, multiple [servers.example.discovery] entries can be made in gobetween.toml. This solution is simple in theory but would require a large amount of changes internally. However, it would enable a lot of flexibility in what gobetween can do.
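For illustration, if DiscoveryConfig became a slice, TOML's array-of-tables syntax would express repeated entries naturally; a hypothetical sketch (this syntax is not currently supported by gobetween):

```toml
[[servers.example.discovery]]
kind = "lxd"
lxd_server_address = "https://lxd-01.example.com:8443"

[[servers.example.discovery]]
kind = "docker"
docker_endpoint = "http://localhost:2375"
```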

  3. Handle this on a per-discovery basis.

Rather than implement a global solution, this would be dealt with on a per-discovery basis. Examples of this can be seen in https://github.com/yyyar/gobetween/pull/90. By doing this, gobetween would still be limited to one discovery per server, but at least now certain discovery plugins are no longer limited to the one-to-one relationship.
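As a sketch of what a per-discovery approach might look like, the LXD driver could accept a list of servers itself (the plural `lxd_server_addresses` key is hypothetical; #90 contains the variants that were actually proposed):

```toml
[servers.example.discovery]
kind = "lxd"
# hypothetical plural key: the driver itself fans out to several servers
lxd_server_addresses = [
    "https://lxd-01.example.com:8443",
    "https://lxd-02.example.com:8443"
]
```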

I'm interested to hear your thoughts on this. Additionally, I would be happy to help any way I can.

nickdoikov commented 7 years ago

Hello @jtopjian. I've thought about this feature several times over the last year, but there are some questions that need to be answered before implementation starts.

How about consistency? Let's assume that we have this feature: all discoveries can and will respond asynchronously, and some of them can fail. It will also not be possible to apply a failpolicy to each sub-discovery; otherwise the total merged backend list can be partially inconsistent. This is more about the logic than about the implementation.

There are many questions related to this feature. Let's discuss them here. @yyyar @illarion

jtopjian commented 7 years ago

@nickdoikov All good points.

Let's assume that we have this feature: all discoveries can and will respond asynchronously

Agreed. Multiple discoveries means distributed communication which means asynchronous which means anything can happen at any time. :)

it will also not be possible to apply a failpolicy to each sub-discovery

The issue of repeatable configuration is a problem I first ran into in https://github.com/yyyar/gobetween/pull/90 (where I gave 4 examples of how to implement the configuration). Simply allowing all configuration to be repeated is the simplest solution.

But you are absolutely correct that certain items like failpolicy should not be repeated.

How about if there were global/master discovery configurations (which would be applied to all discoveries) and then per-discovery configurations (which would only handle the connection to the specific discovery).

If multiple discoveries are used, the simplest design would be to enforce setempty on a failed discovery. This would mark the entire discovery as untrustworthy/unhealthy.

What if a discovery is so important that it can't be setempty? Make the discovery endpoint fault-tolerant so that communication is never interrupted.

What if a discovery is known to fail? Add multiple discoveries so they can safely fail.

setempty by default makes sense in a lot of ways and closely follows modern design of expecting failure, but I admit that I might be missing important pieces.
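For context, gobetween's existing discovery options already include failpolicy (keeplast or setempty) along with interval and timeout; under this proposal, each sub-discovery might simply default to setempty, e.g.:

```toml
[servers.example.discovery]
kind = "lxd"
lxd_server_address = "https://lxd-01.example.com:8443"
failpolicy = "setempty"   # on failure, treat this discovery's backends as gone
interval = "10s"
timeout = "5s"
```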

This is more about the logic than about the implementation. There are many questions related to this feature. Let's discuss them here.

Agreed - this is why I closed #90 (and technically I could close #95 because it requires the same feature). This feature deserves careful thought and consideration.

nickdoikov commented 7 years ago

We probably need to limit the failpolicy to each sub-discovery section and apply the multi-discovery's own policy to the currently discovered and merged list of backends. We also need to prevent a single backend server:port from being added twice or more from different sub-discoveries (this is possible at some point).

@yyyar what do you think about this?

jtopjian commented 7 years ago

@nickdoikov What do you mean by "sub-discovery" and "multi-discovery"? I might have the same concepts in mind but under different names.

We also need to prevent a single backend server:port from being added twice or more from different sub-discoveries (this is possible at some point).

Agreed.
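The duplicate-prevention step above can be sketched as a simple order-preserving merge keyed on host:port. This is an illustrative sketch, not gobetween's actual code; real backends also carry priority and weight, which a merge would need to reconcile:

```go
package main

import "fmt"

// mergeBackends merges backend lists from several discoveries,
// dropping duplicate host:port entries while preserving order.
func mergeBackends(lists ...[]string) []string {
	seen := make(map[string]bool)
	var merged []string
	for _, list := range lists {
		for _, addr := range list {
			if !seen[addr] {
				seen[addr] = true
				merged = append(merged, addr)
			}
		}
	}
	return merged
}

func main() {
	lxd1 := []string{"10.0.0.10:80", "10.0.0.11:80"}
	lxd2 := []string{"10.0.0.11:80", "10.0.0.12:80"} // 10.0.0.11:80 overlaps
	fmt.Println(mergeBackends(lxd1, lxd2))
	// prints [10.0.0.10:80 10.0.0.11:80 10.0.0.12:80]
}
```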

yyyar commented 7 years ago

Hi Everyone! So what we can do is introduce another discovery type, "composite" or "mixed", that holds a list of other discoveries. It would be backward compatible and would reuse most of the code we have.

[servers.sample]
bind="0.0.0.0:3000"
protocol="tcp"

    [servers.sample.discovery]
    kind="mixed"

        # ok, we want a list of different discoveries :-)

        [[servers.sample.discovery.mixed]]
        kind = "static"
        static_list = [
            "localhost:8000",
            "localhost:8001"
        ]

        [[servers.sample.discovery.mixed]]
        kind = "docker"
        docker_endpoint = "http://localhost:2375"
        docker_container_private_port = 80

        # ... other

So it will allow simultaneous usage of many discoveries of different types (or even of the same type) at the same time.

The mixed discovery will have its own failpolicy and other properties, as will its child discoveries. Its interval and timeout may be used as default values for child discoveries that don't override them. The mixed discovery will get backends from its child discoveries, merge them, and resolve conflicts and/or duplicates.
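Under that scheme, a parent-level interval would act as a default that a child can override; a sketch extending the example above (setting these keys at both levels is part of the proposal, not existing behavior):

```toml
[servers.sample.discovery]
kind = "mixed"
failpolicy = "keeplast"   # applies to the merged backend list
interval = "10s"          # default for all children

    [[servers.sample.discovery.mixed]]
    kind = "docker"
    docker_endpoint = "http://localhost:2375"
    interval = "30s"      # child overrides the parent's default
```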

What do you think?

jtopjian commented 7 years ago

@yyyar It's a pretty clever solution :)

I like that it's able to keep the current plugin architecture and retain backwards compatibility.

Since one goal is to be able to define multiple same discoveries, would it be better to name it "multi"?

[servers.sample]
bind="0.0.0.0:3000"
protocol="tcp"

    [servers.sample.discovery]
    kind="multi"

        [[servers.sample.discovery.multi]]
        kind = "lxd"
        lxd_server_address = "https://lxd-01.example.com:8443"

        [[servers.sample.discovery.multi]]
        kind = "lxd"
        lxd_server_address = "https://lxd-02.example.com:8443"

        [[servers.sample.discovery.multi]]
        kind = "docker"
        docker_endpoint = "http://localhost:2375"
        docker_container_private_port = 80

yyyar commented 7 years ago

@jtopjian thanks! Yep, "multi" sounds good! :-) I'll try to make a proof of concept as soon as I have some time.

@nickdoikov @illarion your comments please :-)

zacksiri commented 6 years ago

Have you guys finalized this yet, or is this still in proof-of-concept mode? I have some suggestions. I'm currently using LXD with gobetween and it works great, though as this issue states, it's a 1-to-1 relationship with my LXD installation. That said, I think a 1-to-1 relationship with LXD is a good idea. Right now I just route all my traffic to gobetween, and gobetween distributes traffic based on SNI.

How would this work with multiple discoveries? I mean, the configuration difficulty for gobetween / LXD / node networking would be very high; now we're talking cross-node networking. I think it would be best to avoid that, no?

I saw that you guys are thinking about something where you have multiple gobetween instances and route between them. I think what is proposed in #77 is pretty smart. This keeps things simple configuration-wise.

jtopjian commented 6 years ago

@zacksiri

How would his work with multiple discoveries? i mean now the configuration difficulty for gobetween / lxd / node networking will be very difficult. Now we're talking cross-node networking. I think it would be best to avoid that. no?

If I understand correctly, you're talking about having gobetween run locally on an LXD server while it simultaneously communicates with other LXD servers? Technically as long as the LXD containers are accessible to other LXD servers (you've configured your containers on an appropriately accessible bridge) then all should be fine.

But if you are doing NAT to your containers then you're right: it gets very complicated. In that situation, the proposed solution in #77 would work best.

(One thing that's worth mentioning here is that LXD now natively supports port-forwarding, so if you are running gobetween to only forward traffic from a public network to private LXD containers, you no longer need to do that)
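For reference, LXD's native port forwarding is configured through a proxy device; a sketch (the container name and ports are illustrative):

```shell
# Forward host port 80 into container "web01" on its local port 80
lxc config device add web01 http proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
```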

IMO, the best way of implementing this proposed multi-backend discovery is to keep gobetween on a dedicated host/vm/container and have it communicate with items on other hosts/vms/containers. Running gobetween on the same host is great for simple solutions and provides a programmatic/API-driven way to configure NAT, but anything beyond that gets complicated.

I hope that helps?

zacksiri commented 6 years ago

I just read about LXD support for port forwarding. It's done by setting up a proxy device. However, I'm curious: isn't setting up gobetween locally already like setting up a shared proxy device across all nodes on a specific host, where gobetween just routes traffic between multiple nodes? That means we also have to expose many ports on a host. So theoretically LXD's proxy device is no better than setting up a local gobetween instance, where you only expose 1 port and there's no mucking about with deciding which port to map to.

It might be worth having a solution where gobetween instances can discover other gobetween nodes: a dispatcher node that all domains point to, where all the dispatcher does is raw TCP routing between the different gobetween nodes that provide the TLS + SNI.

I just think the setup in #77 seems simpler, even with LXD's native port forwarding via proxy devices.

jtopjian commented 6 years ago

However, I'm curious: isn't setting up gobetween locally already like setting up a shared proxy device across all nodes?

Yes, but one thing to keep in mind is that LXD's port forwarding has only been available for the past couple of weeks.

You're correct about there being differences between LXD's new proxy device and gobetween - there are applicable cases where one is preferred over the other. I only wanted to mention the proxy device in case it simplified your configuration :)

It might be worth it to have a solution where gobetween instances can discover other gobetween nodes

I agree. To be clear: I think the API discovery in #77 is still a valid thing for a whole bunch of use-cases. This issue (#112) is more of a high-level design/implementation for things like #77 and multi-server LXD.

zacksiri commented 6 years ago

I agree. To be clear: I think the API discovery in #77 is still a valid thing for a whole bunch of use-cases. This issue (#112) is more of a high-level design/implementation for things like #77 and multi-server LXD.

Ok, makes sense. I think I'm quite settled on my architecture; #77 will probably work better. It seems like a much better use of resources.

Now I just have to figure out how to get the dispatcher to detect services from multiple gobetween instances. SNI is probably required to detect the service name, so both the dispatcher and the local gobetween will have to be configured to support TLS + SNI. Hmm... I guess it's not too bad if wildcard SSL certificates are used.

nickdoikov commented 3 years ago

let's bring this discussion back to life %) @illarion @yyyar WDYT about implementing Yaroslav's proposed solution?