Closed fahadshery closed 4 months ago
@fahadshery Hi, refer to the K3s docs: https://docs.k3s.io/networking/basic-network-options#dual-stack-ipv4--ipv6-networking You have to enable IPv6 when installing K3s.
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.29.4+k3s1 sh -s - --write-kubeconfig-mode 644 --cluster-cidr=10.42.0.0/16,2001:cafe:42::/56 --service-cidr=10.43.0.0/16,2001:cafe:43::/112
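For anyone who prefers a config file over install-time flags: the same dual-stack settings can be placed in /etc/rancher/k3s/config.yaml, which K3s reads on startup. A minimal sketch, using the example CIDRs from the command above (adjust to your network):

```yaml
# /etc/rancher/k3s/config.yaml -- equivalent to the CLI flags above
write-kubeconfig-mode: "644"
cluster-cidr: "10.42.0.0/16,2001:cafe:42::/56"
service-cidr: "10.43.0.0/16,2001:cafe:43::/112"
```

Note that cluster-cidr and service-cidr cannot be changed on an existing cluster; dual-stack has to be enabled at install time, as mentioned above.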
what's the best way to do it now? I have already setup my awx instance. People have setup their credentials, projects and 100s of templates. I have some nodes that need to be managed via awx who are only accessible via ipv6 now ...hence this question.. :(
Oh, too bad :( Some ideas:
Hi,
I enabled dual stack as you mentioned, but the postgres pod is not working. Here is the output:
[root@awx-vm awx-on-k3s]# kubectl describe pod awx-postgres-15-0 -n awx
Name:             awx-postgres-15-0
Namespace:        awx
Priority:         0
Service Account:  default
Node:             awx-vm/IPv4 Address
Start Time:       Thu, 16 May 2024 14:35:23 +0100
Labels:           app.kubernetes.io/component=database
                  app.kubernetes.io/instance=postgres-15-awx
                  app.kubernetes.io/managed-by=awx-operator
                  app.kubernetes.io/name=postgres-15
                  app.kubernetes.io/part-of=awx
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=awx-postgres-15-7cfb7786c4
                  statefulset.kubernetes.io/pod-name=awx-postgres-15-0
Annotations:      <none>
Status:           Running
IP:               10.42.0.28
IPs:
  IP:  10.42.0.28
  IP:  2001:cafe:42::1c
Controlled By:  StatefulSet/awx-postgres-15
Containers:
  postgres:
    Container ID:   containerd://21720f805cabcac99703dbdd12eb0ff11b2ed192d6752cf7a79d629fa8e45754
    Image:          quay.io/sclorg/postgresql-15-c9s:latest
    Image ID:       quay.io/sclorg/postgresql-15-c9s@sha256:b12e2a83a61bfec5873fcb1e88a16bacb746ecf235c2701ad81880d4b81d5380
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 16 May 2024 14:36:51 +0100
      Finished:     Thu, 16 May 2024 14:36:51 +0100
    Ready:          False
    Restart Count:  4
Environment:
POSTGRESQL_DATABASE: <set to the key 'database' in secret 'awx-postgres-configuration'> Optional: false
POSTGRESQL_USER: <set to the key 'username' in secret 'awx-postgres-configuration'> Optional: false
POSTGRESQL_PASSWORD: <set to the key 'password' in secret 'awx-postgres-configuration'> Optional: false
POSTGRES_DB: <set to the key 'database' in secret 'awx-postgres-configuration'> Optional: false
POSTGRES_USER: <set to the key 'username' in secret 'awx-postgres-configuration'> Optional: false
POSTGRES_PASSWORD: <set to the key 'password' in secret 'awx-postgres-configuration'> Optional: false
PGDATA: /var/lib/pgsql/data/userdata
POSTGRES_INITDB_ARGS: --auth-host=scram-sha-256
POSTGRES_HOST_AUTH_METHOD: scram-sha-256
Mounts:
/var/lib/pgsql/data from postgres-15 (rw,path="data")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cs7tk (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
postgres-15:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: postgres-15-awx-postgres-15-0
ReadOnly: false
kube-api-access-cs7tk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m7s default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal Scheduled 2m6s default-scheduler Successfully assigned awx/awx-postgres-15-0 to blp20504034
Normal Pulled 38s (x5 over 2m6s) kubelet Container image "quay.io/sclorg/postgresql-15-c9s:latest" already present on machine
Normal Created 38s (x5 over 2m5s) kubelet Created container postgres
Normal Started 38s (x5 over 2m5s) kubelet Started container postgres
Warning BackOff 8s (x10 over 2m4s) kubelet Back-off restarting failed container postgres in pod awx-postgres-15-0_awx(5ef6907f-2201-4721-bafc-0fd20d82365a)
Ignore the above message. I added the line postgres_data_volume_init: true to my base/awx.yaml file and that fixed it.
Keeping it open for now to make sure IPv6 works.
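For reference, postgres_data_volume_init is a field in the AWX custom resource spec. A minimal sketch of where the line goes in base/awx.yaml (resource name and surrounding layout assumed from the awx-on-k3s example; merge it into your existing spec rather than replacing it):

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  # Runs an init container that fixes ownership/permissions on the
  # PostgreSQL data directory before the database starts.
  postgres_data_volume_init: true
```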
This issue is stale because it has been open 10 days with no activity. Remove stale label or comment or this will be closed in 4 days.
This issue was closed because it has been open 2 weeks with no activity.
Hi,
I enabled IPv6 in a dual stack configuration.
NAME READY STATUS RESTARTS AGE
pod/awx-operator-controller-manager-66d859dc9f-djtwf 2/2 Running 2 (17d ago) 17d
pod/awx-postgres-15-0 1/1 Running 0 17d
pod/awx-web-6d4cf99b8-wrvhw 3/3 Running 0 17d
pod/awx-migration-24.3.1-ltm83 0/1 Completed 0 17d
pod/awx-task-58786fc688-cvb8m 4/4 Running 0 17d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/awx-operator-controller-manager-metrics-service ClusterIP 10.43.36.157 <none> 8443/TCP 17d
service/awx-postgres-15 ClusterIP None <none> 5432/TCP 17d
service/awx-service ClusterIP 10.43.207.5 <none> 80/TCP 17d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/awx-operator-controller-manager 1/1 1 1 17d
deployment.apps/awx-web 1/1 1 1 17d
deployment.apps/awx-task 1/1 1 1 17d
NAME DESIRED CURRENT READY AGE
replicaset.apps/awx-operator-controller-manager-66d859dc9f 1 1 1 17d
replicaset.apps/awx-web-6d4cf99b8 1 1 1 17d
replicaset.apps/awx-task-58786fc688 1 1 1 17d
NAME READY AGE
statefulset.apps/awx-postgres-15 1/1 17d
NAME COMPLETIONS DURATION AGE
job.batch/awx-migration-24.3.1 1/1 18s 17d
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/awx-ingress traefik mytower.example.com X.X.X.X,IPv6::7 80, 443 17d
I can ping the IPv6 address from the VM, but can't access the IPv6 target when running my job.
...
How do I test if IPv6 is enabled and accessible within the Pod?
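One way to check (a sketch, not an official procedure): exec into the task container and run a short Python snippet that asks the network namespace for an IPv6 socket. The deployment and container names below (awx-task) are taken from the listing above and may differ in your cluster.

```python
# Run inside the container, e.g.:
#   kubectl -n awx exec deploy/awx-task -c awx-task -- python3 ipv6_check.py
import socket

def ipv6_available() -> bool:
    """Return True if this network namespace can open and bind an IPv6 socket."""
    if not socket.has_ipv6:  # Python was built without IPv6 support
        return False
    try:
        # Creating an AF_INET6 socket raises OSError (EAFNOSUPPORT) when the
        # kernel/network namespace has IPv6 disabled; binding to ::1 also
        # verifies the IPv6 loopback address is configured.
        with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as s:
            s.bind(("::1", 0))
        return True
    except OSError:
        return False

if __name__ == "__main__":
    print("IPv6 available:", ipv6_available())
```

If that prints True but the target is still unreachable, the remaining suspects are pod egress/NAT for IPv6 and DNS (AAAA) resolution rather than the pod's IPv6 stack itself.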
Environment
k3s version v1.29.3+k3s1 (8aecc26b)
go version go1.21.8
Description
I have some nodes that are only accessible via IPv6. I get the following error when running playbooks against them:
Steps to Reproduce
All pods are up and running.
I am able to ping the IPv6 node from the host machine running AWX.
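In case it helps others hitting the same error: when a managed node is reachable only by an IPv6 literal, Ansible accepts the bare address in ansible_host. A hedged inventory sketch (host and group names are hypothetical; the address is a documentation-range example, not from this thread):

```ini
; hypothetical inventory entry for an IPv6-only node
[ipv6_nodes]
node1 ansible_host=2001:db8::10 ansible_user=ansible
```

Whether the job succeeds still depends on the execution environment pod actually having IPv6 egress, which is what the dual-stack K3s configuration above is meant to provide.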