Closed: zetaab closed this issue 3 years ago.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@fejta-bot: Closing this issue.
Is this a BUG REPORT or FEATURE REQUEST?: feature
/kind feature
What happened: Now that Octavia has availability zone (AZ) support when provisioning load balancers, could we support some kind of topology awareness when creating load balancers? Example: 3 network AZs without L2 connectivity between them, which means that floating IPs cannot move between zones.
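For illustration, a minimal sketch of how a topology-aware Service could look. The `loadbalancer.openstack.org/availability-zones` annotation below is hypothetical (this issue is asking for something like it); it is only here to make the idea concrete:

```yaml
# Hypothetical sketch: the availability-zones annotation does not exist today.
# It illustrates asking the cloud provider to create one Octavia load balancer
# per listed zone instead of a single load balancer in one zone.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Hypothetical annotation, named here only for illustration.
    loadbalancer.openstack.org/availability-zones: "az1,az2,az3"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```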
What you expected to happen:
I have been thinking about a setup where, when we provision a load balancer, it automatically creates a load balancer in each zone in the background (meaning that instead of 1 IP we have 3 IPs across 3 zones). My plan was that we could do something similar to what ELB does: create a DNS name that resolves to these per-zone load balancer IPs, monitor them, and remove an IP from the record if its zone is down (with a TTL of 60 seconds). The problem is that, at least for us, we would like to use ExternalDNS for this instead of Designate.
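To make the ELB-style DNS idea concrete, here is a rough sketch of the resulting record set if it were handed to ExternalDNS through its DNSEndpoint CRD. The hostname and the three per-zone floating IPs are placeholders, and the health-check logic that removes a dead zone's IP would live in a controller, not in this object:

```yaml
# Sketch only: a DNSEndpoint (ExternalDNS CRD source) holding one A record
# with the three per-zone load balancer IPs and a 60s TTL. A controller
# watching zone health would drop an IP from targets when its zone is down.
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: my-app-lb
spec:
  endpoints:
    - dnsName: my-app.example.com   # placeholder hostname
      recordType: A
      recordTTL: 60                 # short TTL so removed IPs fall out of caches fast
      targets:                      # one floating IP per zone (placeholder addresses)
        - 203.0.113.10              # az1
        - 203.0.113.20              # az2
        - 203.0.113.30              # az3
```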