[Closed] fernandohackbart closed this issue 6 years ago.
The content of my ConfigMap:
apiVersion: v1
data:
  192.168.40.81: default/echoheaders
kind: ConfigMap
metadata:
  annotations:
    k8s.co/cloud-provider-config: '{"services":[{"uid":"ecf6d112-f4a5-11e7-b051-525400e28760","ip":"192.168.40.81"}]}'
  creationTimestamp: 2018-01-08T19:04:31Z
  name: vip-configmap
  namespace: kube-system
  resourceVersion: "1665"
  selfLink: /api/v1/namespaces/kube-system/configmaps/vip-configmap
  uid: c1c5917b-f4a6-11e7-b051-525400e28760
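kube-keepalived-vip expects each key in `data` to be a VIP and each value to be a `namespace/service` pair. As a side note, a local sanity check of that shape can be sketched in a few lines of Python (the helper name `validate_vip_map` is illustrative, not part of kube-keepalived-vip):

```python
import ipaddress

def validate_vip_map(data):
    """Check that every key is an IP address and every value
    looks like 'namespace/service', as kube-keepalived-vip expects."""
    errors = []
    for vip, target in data.items():
        try:
            ipaddress.ip_address(vip)
        except ValueError:
            errors.append(f"{vip!r} is not a valid IP address")
        parts = target.split("/")
        if len(parts) != 2 or not all(parts):
            errors.append(f"{target!r} is not in namespace/service form")
    return errors

# The mapping from the ConfigMap above
print(validate_vip_map({"192.168.40.81": "default/echoheaders"}))  # → []
```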
The DaemonSet creation file:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    name: kube-keepalived-vip
  name: kube-keepalived-vip
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        name: kube-keepalived-vip
    spec:
      hostNetwork: true
      containers:
      - image: gcr.io/google_containers/kube-keepalived-vip:0.11
        name: kube-keepalived-vip
        imagePullPolicy: Always
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /lib/modules
          name: modules
          readOnly: true
        - mountPath: /dev
          name: dev
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # to use unicast
        args:
        - --services-configmap=kube-system/vip-configmap
        # unicast uses the ip of the nodes instead of multicast
        # this is useful if running in cloud providers (like AWS)
        #- --use-unicast=true
        # vrrp version can be set to 2. Default 3.
        #- --vrrp-version=2
      volumes:
      - name: modules
        hostPath:
          path: /lib/modules
      - name: dev
        hostPath:
          path: /dev
      nodeSelector:
        node-role.kubernetes.io/node: worker
Using aledbf/kube-keepalived-vip:0.23 instead of gcr.io/google_containers/kube-keepalived-vip:0.11.
Hi @fernandohackbart, may I see your working vip-configmap and DaemonSet? Somehow mine didn't work, although I tried to follow the guidance from the README.
Sure, sample application:
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: echoheaders
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: echoheaders
    spec:
      containers:
      - name: echoheaders
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoheaders-service
  labels:
    app: echoheaders
  annotations:
    k8s.co/keepalived-forward-method: DR
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    name: http
  selector:
    app: echoheaders
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: vip-configmap
  namespace: kube-system
data:
  192.168.40.81: default/echoheaders-service
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    name: kube-keepalived-vip
  name: kube-keepalived-vip
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        name: kube-keepalived-vip
    spec:
      hostNetwork: true
      containers:
      #- image: gcr.io/google_containers/kube-keepalived-vip:0.11
      - image: aledbf/kube-keepalived-vip:0.23
        name: kube-keepalived-vip
        imagePullPolicy: Always
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /lib/modules
          name: modules
          readOnly: true
        - mountPath: /dev
          name: dev
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # to use unicast
        args:
        - --services-configmap=kube-system/vip-configmap
        # unicast uses the ip of the nodes instead of multicast
        # this is useful if running in cloud providers (like AWS)
        #- --use-unicast=true
        # vrrp version can be set to 2. Default 3.
        #- --vrrp-version=2
      volumes:
      - name: modules
        hostPath:
          path: /lib/modules
      - name: dev
        hostPath:
          path: /dev
      nodeSelector:
        node-role.kubernetes.io/node: worker
I have also labeled the nodes with:
kubectl get nodes | grep -v master | grep Ready | awk '{print $1}' | while read line
do
  kubectl label node $line node-role.kubernetes.io/node=worker
done
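The selection logic of that pipeline (skip the master, keep Ready nodes, take the name from the first column) can be sketched in plain Python against sample `kubectl get nodes` output; the node names below are made up for illustration, and the status comparison is deliberately stricter than the bare `grep Ready`, which would also match `NotReady`:

```python
def worker_nodes(kubectl_output):
    """Mimic `grep -v master | grep Ready | awk '{print $1}'`:
    return names of Ready, non-master nodes."""
    names = []
    for line in kubectl_output.splitlines()[1:]:  # skip the header row
        cols = line.split()
        if not cols:
            continue
        name, status = cols[0], cols[1]
        if "master" not in line and status == "Ready":
            names.append(name)
    return names

sample = """NAME      STATUS     ROLES    AGE   VERSION
master-1  Ready      master   10d   v1.9.0
worker-1  Ready      <none>   10d   v1.9.0
worker-2  NotReady   <none>   10d   v1.9.0"""

print(worker_nodes(sample))  # → ['worker-1']
```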
I had the same issue, and it turned out I had no "echoheaders" service.
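That root cause (a ConfigMap entry pointing at a Service that does not exist) can be caught before deploying. A minimal sketch, using plain Python dicts standing in for the parsed manifests; the helper name `missing_services` is hypothetical:

```python
def missing_services(vip_map, services):
    """Return ConfigMap targets that do not match any known
    (namespace, service-name) pair."""
    known = {f"{ns}/{name}" for ns, name in services}
    return [t for t in vip_map.values() if t not in known]

# Services declared in the sample manifests above: (namespace, name)
services = [("default", "echoheaders-service")]

# Matching entry: nothing is missing
print(missing_services({"192.168.40.81": "default/echoheaders-service"}, services))  # → []

# The mapping from the first ConfigMap: no 'echoheaders' Service exists
print(missing_services({"192.168.40.81": "default/echoheaders"}, services))  # → ['default/echoheaders']
```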