Throughout Layer0's lifecycle, we've had discussions on how to improve the relationship between loadbalancers (ELBs) and services from the CLI. The workflow is currently:
1. Create a `deploy` (which already defines most of the ports you need).
2. Create a `loadbalancer`, with all of the ports supplied again on the command line.
3. Create a `service`, which must be tied specifically to the `loadbalancer` you just created.

These steps must be completed in order, as a `service` must have a `loadbalancer` already available at creation time.
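Concretely, the current workflow looks something like the following (command and flag syntax shown here is illustrative and may not match the exact Layer0 CLI; the environment, deploy, and service names are made up):

```shell
# 1. Create the deploy -- its task definition already pins
#    hostPort -> containerPort (e.g. 8000 -> 80).
l0 deploy create ./Dockerrun.aws.json demo-deploy

# 2. Create the loadbalancer, re-supplying the ports. Note that the
#    public ELB port (443 here) appears only in this command.
l0 loadbalancer create --port 443:8000/https demo-env demo-lb

# 3. Create the service, tied to the loadbalancer from step 2.
l0 service create --loadbalancer demo-lb demo-env demo-svc demo-deploy
```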
While it would be better UX to simply derive the ELB configuration from a `deploy`, the problem is that an ELB's ports have a relationship that can't be obtained solely from a `deploy` / task definition. The relationship works out to be `public ELB port -> hostPort -> containerPort`, and only the latter two ports are defined in a `deploy`. As such, we have to force the user to supply that final mapping of `public ELB port -> hostPort`.