spinnaker / spinnaker

Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.
http://www.spinnaker.io/
Apache License 2.0

support lbaas v2 for openstack in clouddriver - atomic operations #977

Closed: ashleycutalo closed this issue 8 years ago

ashleycutalo commented 8 years ago
emjburns commented 8 years ago

Asked about support in google group: https://groups.google.com/forum/#!topic/openstack4j/uwWJgHIfQn8

derekolk commented 8 years ago

Moving to LBaaS v2 can dramatically change the way we create load balancers and align it better with how Spinnaker and its other providers (e.g. AWS) have implemented this. Here are my thoughts on how the functionality would change; I would like feedback.

Create Load Balancer (https://slack-files.com/T091CRSGH-F1YDUURK9-116915d107)

NOTE - A default security group will be created based upon listener rules and applied to the load balancer to allow traffic to flow properly
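For illustration only, the derived default security group for a load balancer with a single HTTP listener on external port 80 might look something like the following sketch (field names mirror the Neutron security group API; the group name is a hypothetical convention, and opening the rule to all sources is an assumption):

{
  "securityGroup": {
    "name": "lb-test-default",          // hypothetical naming convention derived from the load balancer name
    "rules": [
      {
        "direction": "ingress",
        "ethertype": "IPv4",
        "protocol": "tcp",
        "portRangeMin": 80,             // taken from the listener's external port
        "portRangeMax": 80,
        "remoteIpPrefix": "0.0.0.0/0"   // assumption: open to all sources unless the user overrides it
      }
    ]
  }
}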

Update Load Balancer (https://slack-files.com/T091CRSGH-F1YDY3VB3-d5f68634f0)

NOTE - Removing all security groups will result in the system selecting the default security group created during the create phase. Also, the default security group does NOT get updated when listener rules change during an update; its rules remain as defined during the original create.

Please review @jcwest @drmaas @emjburns @varikin @pravka @gkuchta

derekolk commented 8 years ago

My initial contract in clouddriver would look something like this:

{ "upsertLoadBalancer": { "id": "4e0d0b4b-8089-4703-af99-b6a0c90fbbc7", "name": "test", "vipSubnetId" : "4e0d0b4b-8089-4703-af99-b6a0c90fbbc7", // The network on which to allocate the load balancer's vip address "networkId": "4e0d0b4b-8089-4703-af99-b6a0c90fbbc7", // Used to create a floating IP address "algorithm": "ROUND_ROBIN", // ROUND_ROBIN, LEAST_CONNECTIONS, SOURCE_IP "securityGroups" : ["4e0d0b4b-8089-4703-af99-b6a0c90fbbc7"], // Associated security groups "listeners": [ { "externalProtocol": "HTTP", // Supports HTTP/HTTPS/TCP/TERMINATED_HTTPS "extenalPort": "10", "internalProtocol": "HTTP", "internalPort": "10", "sslCertificateId": "" // Need to check - Barbican certificate container id or certificate names } ], "healthMonitor": { "type": "PING", // PING/TCP/HTTP/HTTPS "delay": "10", // The interval in seconds between health checks. "timeout": "10", // The time in seconds that a health check times out. "maxRetries": "10", // Number of failed health checks before marked as OFFLINE. "httpMethod": "GET", // USED only for HTTP and HTTPS monitors "url": "/healthCheck" // USED only for HTTP and HTTPS monitors "expectedCodes" : [200, 201, 202] // USED only for HTTP and HTTPS monitors }, "session_persistence": { "type": "APP_COOKIE", "cookie_name": "my_cookie" // Only for APP_COOKIE } } }

emjburns commented 8 years ago

User should be able to update lb algorithm and session persistence. Other than that everything you mentioned looks good.
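For example, against the contract above, an update that changes just those two settings might look something like this (a sketch only; it assumes the upsert accepts the same fields, with unchanged ones omitted here for brevity):

{
  "upsertLoadBalancer": {
    "id": "4e0d0b4b-8089-4703-af99-b6a0c90fbbc7",
    "algorithm": "LEAST_CONNECTIONS",   // changed from ROUND_ROBIN
    "session_persistence": {
      "type": "SOURCE_IP"               // switching away from APP_COOKIE drops cookie_name
    }
    // listeners, healthMonitor, securityGroups, etc. unchanged and omitted for brevity
  }
}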

jcwest commented 8 years ago

I assume that a separate pool would be allocated for each unique internal port + protocol pair and that all pools would be allocated with the same distribution algorithm and health check. This would make it impossible to have different distribution algorithms or different health checks for different pools. This could be severely limiting if the user is trying to create a minimal setup where a single load balancer is used for many services.

We could alleviate this by modelling the pools separately:

{ ... "listeners": [ { "protocol": "HTTPS", // Supports HTTP/HTTPS/TCP/TERMINATED_HTTPS "port": 10, "sslCertificateId": "..." } ], "pools":[ { "protocol": "HTTP", "port": "10", "algorithm": "ROUND_ROBIN", // ROUND_ROBIN, LEAST_CONNECTIONS, SOURCE_IP "listenerPorts":[10], //provides the binding information to defined listeners "healthMonitor":{...} } ], ... }

drmaas commented 8 years ago

@jcwest In Liberty, each listener must have a dedicated pool, with its own health check and members. I think the same health check could be applied to multiple pools.

Each pool will use the protocol and listen on the port specified by its listener.

We will therefore end up with a lot of pools, but this is how Liberty was designed to be used.

In Mitaka a pool can be shared across all listeners in a load balancer.
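To make the Liberty shape concrete, the proposed contract with two listeners would roughly fan out into Neutron objects like the following sketch, with each listener getting its own pool and its own copy of the health monitor (field names follow the Neutron LBaaS v2 API; the ids are made up):

{
  "loadbalancer": { "id": "lb-1", "vip_subnet_id": "subnet-1" },
  "listeners": [
    { "id": "listener-80",  "loadbalancer_id": "lb-1", "protocol": "HTTP",             "protocol_port": 80,  "default_pool_id": "pool-80" },
    { "id": "listener-443", "loadbalancer_id": "lb-1", "protocol": "TERMINATED_HTTPS", "protocol_port": 443, "default_pool_id": "pool-443" }
  ],
  "pools": [
    { "id": "pool-80",  "protocol": "HTTP", "lb_algorithm": "ROUND_ROBIN", "healthmonitor_id": "hm-80" },
    { "id": "pool-443", "protocol": "HTTP", "lb_algorithm": "ROUND_ROBIN", "healthmonitor_id": "hm-443" }
  ],
  "healthmonitors": [
    { "id": "hm-80",  "type": "HTTP", "delay": 10, "timeout": 10, "max_retries": 3, "url_path": "/healthCheck" },
    { "id": "hm-443", "type": "HTTP", "delay": 10, "timeout": 10, "max_retries": 3, "url_path": "/healthCheck" }   // duplicated per pool in Liberty
  ]
}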

jcwest commented 8 years ago

Given that Mitaka will/does support pool sharing, shouldn't we strive to put in place a model that will be easily extended to support that (if not add support for that now)?

I agree that ending up with a lot of pools/listeners is unavoidable (you will have at least 1 pool per service, and pool sharing will allow multiple listeners per pool), but having a lot of LBs probably isn't OK. The difference is that listeners result in port allocations/reservations on the same IP, while each LB requires a separate IP (or two with a floating IP). One thing that concerns me is the healthMonitor... that will likely vary by service, so it needs to be defined per pool. Having one healthMonitor definition for the entire LB (i.e., all its pools) means that we would need to allocate a new LB for each healthMonitor pattern, burning another IP address.

If there is a concern about having multiple redundant health monitors defined in openstack, we can optimize that on the back-end... perform a search for a monitor with the matching settings and use the existing one, if found, or else create one.

Conceptually, a spinnaker LB is closer to an openstack LB pool than to an openstack LB (VIP). When creating a server group in spinnaker, we identify a "load balancer" to use, but in openstack we need to bind the instances to a pool, not just a LB/VIP. Extending on that idea, in the LB create UI we can give the user the option to bind it to an existing LB interface (just allocate a port) or create a new one (allocate new IP addresses - one private and an optional public/floating IP). This would be part of the "location" and would not be editable. In terms of the model, the UI would provide either a vipSubnet + allocateFloatingIp flag or a vipId during the create operation (in reality, the UI would probably always supply the former, but the latter would be undefined in the case of allocating a new VIP). The expensive part of this approach is that we would need to provide an API to list the existing LB interfaces (VIPs), but we can probably finesse that in the UI by scraping the information from the existing LB listing.
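To sketch what that could look like in the contract, the create call would carry either the fields for allocating a new VIP or a reference to an existing one (the field names here are hypothetical placeholders rather than a concrete proposal):

{
  "upsertLoadBalancer": {
    "name": "test",
    // Option 1 - allocate a new VIP
    "vipSubnetId": "4e0d0b4b-8089-4703-af99-b6a0c90fbbc7",
    "allocateFloatingIp": true,
    // Option 2 - bind to an existing LB interface instead (vipSubnetId/allocateFloatingIp would then be omitted)
    "vipId": null,
    "listeners": [
      { "externalProtocol": "HTTP", "externalPort": "80", "internalProtocol": "HTTP", "internalPort": "8080" }
    ]
  }
}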

derekolk commented 8 years ago

UPDATE ... I talked with @jcwest and @varikin today about the concerns above and summarized them as follows:

What is the correct granularity of a load balancer?

Should we support a single health check across listeners or a health check for each listener?

I am going to discuss with our product owner @pravka

systembell commented 8 years ago

@derekolk At a minimum, I would expect to be able to select one or more load balancers when creating a server group; those load balancers will be used by each server group that is subsequently created (akin to the way the AWS/GCP drivers work).

In terms of health checks, a single check across listeners will suffice, but the only reason I have to back that up is that that's how most of the other drivers expose health check management in Deck (so I'd like to keep the behavior in line w/ how they function).

drmaas commented 8 years ago

@pravka @derekolk

Another consideration is that when load balancers are edited to add/remove listeners, the corresponding ASGs (stacks) that use the load balancer will need to be updated as well with the appropriate pool memberships (since listeners cannot share a pool in Liberty, as they can in Mitaka). This is feasible (via OS::Heat::ResourceGroup for pool members), but will require a bit of work.
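A rough sketch of that approach, assuming the Mitaka-era OS::Neutron::LBaaS::PoolMember resource type is available, could use an OS::Heat::ResourceGroup with one member per instance (JSON-formatted HOT; the parameter names, port, and fixed count are illustrative only):

{
  "heat_template_version": "2016-04-08",
  "parameters": {
    "pool_id":    { "type": "string" },
    "subnet_id":  { "type": "string" },
    "member_ips": { "type": "comma_delimited_list" }
  },
  "resources": {
    "pool_members": {
      "type": "OS::Heat::ResourceGroup",
      "properties": {
        "count": 3,
        "resource_def": {
          "type": "OS::Neutron::LBaaS::PoolMember",
          "properties": {
            "pool":          { "get_param": "pool_id" },
            "address":       { "get_param": ["member_ips", "%index%"] },
            "protocol_port": 8080,
            "subnet":        { "get_param": "subnet_id" }
          }
        }
      }
    }
  }
}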

jcwest commented 8 years ago

I agree with @pravka. It looks like the original approach of having a spinnaker LB == openstack LB (and using a new IP for each spinnaker LB) is in line with the other cloud provider implementations (I looked at AWS, Azure, and GCE), but we do need to allow for multiple load balancers to be associated with a server group.

Consequently, the model originally proposed by @derekolk is fine.

derekolk commented 8 years ago

@pravka @jcwest Thanks for the feedback! Sounds like everyone is fine with my original analysis, and I will continue down that path :)