Closed UltraInstinct14 closed 3 months ago
The Elastic IP needs to be re-associated with the active EC2 instance. For fullNAT mode to work, a private CIDR needs to be associated with the loxilb instances, and the privateCIDR also needs to migrate to the active instance on failover.
The overall pattern is as follows -
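The migration steps above can be sketched with the AWS CLI. All IDs below are placeholders for illustration; `--allow-reassociation` / `--allow-reassignment` let the EIP and the private IP move away from the failed instance without detaching them first:

```shell
# Hypothetical IDs -- substitute your EIP allocation ID and the ENI of
# the now-active loxilb instance.

# 1. Move the private IP (the one bound to the EIP) to the active ENI.
aws ec2 assign-private-ip-addresses \
    --network-interface-id eni-0123456789abcdef0 \
    --private-ip-addresses 192.168.248.254 \
    --allow-reassignment

# 2. Re-associate the Elastic IP with that private IP on the active instance.
aws ec2 associate-address \
    --allocation-id eipalloc-0123456789abcdef0 \
    --network-interface-id eni-0123456789abcdef0 \
    --private-ip-address 192.168.248.254 \
    --allow-reassociation
```

In the actual deployment, loxilb performs this re-association itself when run with `--cloud=aws`; the commands only illustrate the underlying API calls.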
The following is an example HA configuration. Change the instance IPs and subnet settings as needed.
VPC CIDR: 192.168.0.0/16
loxilb instance1: 192.168.218.87
loxilb instance2: 192.168.228.79
Elastic IP: 15.168.149.225
private subnet: 192.168.248.0/24
private IP associated with EIP: 192.168.248.254
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml
spec:
  containers:
  - name: kube-loxilb
    image: ghcr.io/loxilb-io/kube-loxilb:aws-support
    imagePullPolicy: Always
    command:
    - /bin/kube-loxilb
    args:
    - --loxiURL=http://192.168.228.79:11111,http://192.168.218.87:11111
    - --externalCIDR=15.168.149.225/32
    - --privateCIDR=192.168.248.254/32
    - --setRoles=0.0.0.0
    - --setLBMode=2
In loxiURL, specify the IPs of loxilb instances 1 and 2.
In externalCIDR, specify the Elastic IP to use for external access (the netmask must currently be set to /32).
In privateCIDR, specify the private IP to be associated with the Elastic IP (the netmask must currently be set to /32).
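After editing those arguments, the downloaded manifest can be applied and checked as usual (standard kubectl commands; the pod name prefix comes from the upstream manifest):

```shell
# Apply the edited kube-loxilb manifest to the cluster
kubectl apply -f kube-loxilb.yaml

# Verify the kube-loxilb pod comes up in kube-system
kubectl get pods -n kube-system | grep kube-loxilb
```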
loxilb1:
sudo docker run -u root --cap-add SYS_ADMIN \
--restart unless-stopped \
--net=host \
--privileged \
-dit \
-v /dev/log:/dev/log \
-e AWS_REGION=ap-northeast-3 \
--name loxilb \
ghcr.io/loxilb-io/loxilb:aws-support --cloud=aws --cloudcidrblock=192.168.248.0/24 --cluster=192.168.228.79 --self=0
In the --cloudcidrblock option, specify the CIDR of your private subnet.
In the --cluster option, specify the IP address of the peer instance (loxilb2 here).
In the --self option, set 0 for loxilb1 and 1 for loxilb2.
loxilb2:
sudo docker run -u root --cap-add SYS_ADMIN \
--restart unless-stopped \
--net=host \
--privileged \
-dit \
-v /dev/log:/dev/log \
-e AWS_REGION=ap-northeast-3 \
--name loxilb \
ghcr.io/loxilb-io/loxilb:aws-support --cloud=aws --cloudcidrblock=192.168.248.0/24 --cluster=192.168.218.87 --self=1
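Once both containers are up, a quick sanity check on each instance (standard Docker commands; the container name matches the --name flag above):

```shell
# Confirm the loxilb container is running
sudo docker ps --filter name=loxilb

# Inspect recent startup logs for cluster/role negotiation
sudo docker logs loxilb | tail -n 20
```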
Multi-VPC support is yet to be validated, so this is currently limited to multi-AZ within the same VPC.
Is your feature request related to a problem? Please describe.
If loxilb runs in two instances, with each instance in a different VPC or AZ, the same VIP for communication currently can't be maintained.
Describe the solution you'd like
loxilb instances should be able to run in different VPCs/AZs with the same VIP CIDR.
Describe alternatives you've considered
N/A
Additional context
There is a high-level AWS design pattern for how this could be achieved.