kumahq / kuma-counter-demo

This is the counter demo for Kuma, demonstrating the capabilities of the service mesh.

Unable to Start Counter Demo App on Windows+Docker+WSL #30

Closed: iamsourabh-in closed this issue 1 year ago

iamsourabh-in commented 2 years ago

What happened?

I am trying to get started with the Kuma service mesh and ran into an issue while working with the kuma-counter-demo.

Environment details:

- Windows
- Docker Desktop with Linux containers
- WSL (Ubuntu)

I spun up a new cluster with kind and was able to install the Kuma control plane.

When I tried deploying the counter demo app, however, it was unable to start. The Kuma dashboard shows:

"Inbound port is not ready for the service" (I even tried with different ports.)

[screenshot: Kuma dashboard showing the inbound port error]

iamsourabh-in commented 2 years ago

A few screenshots of the logs for the sidecar and init containers:

[screenshot: kuma-sidecar logs]

[screenshot: kuma-init logs]

jakubdyszkiewicz commented 2 years ago

Hey,

what are your versions of Ubuntu and kind?

Triage: marking as needs-reproducing so we can compose a ContainerPatch that disables the conntrack fix. See https://github.com/microsoft/WSL/issues/7547 and https://github.com/microsoft/WSL/issues/7407.
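
In the meantime, something along the following lines should work as the ContainerPatch. This is only a sketch: the resource name is arbitrary, and it assumes kuma-init accepts a --skip-dns-conntrack-zone-split argument (verify with kumactl install transparent-proxy --help for your version) and that ContainerPatch values are JSON-encoded strings.

# Sketch only: the flag and resource names are assumptions; verify against your Kuma version.
kubectl apply -f - <<'EOF'
apiVersion: kuma.io/v1alpha1
kind: ContainerPatch
metadata:
  name: skip-conntrack-zone-split
  namespace: kuma-system
spec:
  initPatch:
    - op: add
      path: /args/-
      value: '"--skip-dns-conntrack-zone-split"'
EOF

A pod would then opt in to the patch with the kuma.io/container-patches: skip-conntrack-zone-split annotation.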

iamsourabh-in commented 2 years ago

Note: I am able to get this up and running on EC2 on Alpine Linux.

Tool versions:

- kind v0.15.0 go1.19 windows/amd64
- kind image: kindest/node:v1.25.0
- Ubuntu 22.04.1 LTS
- Docker Desktop 4.12.0 (version 20.10.17)
- Windows 11

iamsourabh-in commented 2 years ago

Hi, can someone help me figure out why I am not able to run the Kuma demo app?

The app failed to start. The init container logs below show issues with iptables. kumactl version: 1.8.1


sourabh@DESKTOP-FIUUHBV:~$ git clone https://github.com/kumahq/kuma-counter-demo.git
Cloning into 'kuma-counter-demo'...
remote: Enumerating objects: 197, done.
remote: Counting objects: 100% (54/54), done.
remote: Compressing objects: 100% (34/34), done.
remote: Total 197 (delta 31), reused 26 (delta 20), pack-reused 143
Receiving objects: 100% (197/197), 107.72 KiB | 246.00 KiB/s, done.
Resolving deltas: 100% (99/99), done.
sourabh@DESKTOP-FIUUHBV:~$ ls
kuma-1.8.1  kuma-counter-demo
sourabh@DESKTOP-FIUUHBV:~$ cd kuma-
-bash: cd: kuma-: No such file or directory
sourabh@DESKTOP-FIUUHBV:~$ cd kuma-counter-demo/
sourabh@DESKTOP-FIUUHBV:~/kuma-counter-demo$ ls
app                 demo-v2.yaml  GOVERNANCE.md  org_labels.yml  release
CODE_OF_CONDUCT.md  demo.yaml     kong.yaml      OWNERS.md       SECURITY.md
CODEOWNERS          gateway.yaml  LICENSE        README.md       yarn.lock
sourabh@DESKTOP-FIUUHBV:~/kuma-counter-demo$ kubectl apply -f demo.yaml
namespace/kuma-demo created
deployment.apps/redis created
service/redis created
deployment.apps/demo-app created
service/demo-app created
sourabh@DESKTOP-FIUUHBV:~/kuma-counter-demo$ kubectl get pods
No resources found in default namespace.
sourabh@DESKTOP-FIUUHBV:~/kuma-counter-demo$ kubectl get pods -A
NAMESPACE     NAME                                     READY   STATUS     RESTARTS      AGE
kube-system   coredns-95db45d46-v4c62                  1/1     Running    0             8m10s
kube-system   coredns-95db45d46-zrw4n                  1/1     Running    0             8m10s
kube-system   etcd-docker-desktop                      1/1     Running    31            8m14s
kube-system   kube-apiserver-docker-desktop            1/1     Running    32            8m7s
kube-system   kube-controller-manager-docker-desktop   1/1     Running    31            8m13s
kube-system   kube-proxy-cqxqs                         1/1     Running    0             8m10s
kube-system   kube-scheduler-docker-desktop            1/1     Running    37            8m10s
kube-system   storage-provisioner                      1/1     Running    0             8m4s
kube-system   vpnkit-controller                        1/1     Running    0             8m3s
kuma-demo     demo-app-b4f98898-m587h                  0/2     Init:0/1   2 (14s ago)   19s
kuma-demo     redis-8fcbfc795-7hk2k                    0/2     Init:0/1   2 (14s ago)   19s
kuma-system   kuma-control-plane-64d55468b-4ghgv       1/1     Running    0             117s
sourabh@DESKTOP-FIUUHBV:~/kuma-counter-demo$ kubectl logs demo-app-b4f98898-m587h -n kuma-demo > logs.txt
Defaulted container "demo-app" out of: demo-app, kuma-sidecar, kuma-init (init)
Error from server (BadRequest): container "demo-app" in pod "demo-app-b4f98898-m587h" is waiting to start: PodInitializing
sourabh@DESKTOP-FIUUHBV:~/kuma-counter-demo$ kubectl get pods -A
NAMESPACE     NAME                                     READY   STATUS       RESTARTS      AGE
kube-system   coredns-95db45d46-v4c62                  1/1     Running      0             8m44s
kube-system   coredns-95db45d46-zrw4n                  1/1     Running      0             8m44s
kube-system   etcd-docker-desktop                      1/1     Running      31            8m48s
kube-system   kube-apiserver-docker-desktop            1/1     Running      32            8m41s
kube-system   kube-controller-manager-docker-desktop   1/1     Running      31            8m47s
kube-system   kube-proxy-cqxqs                         1/1     Running      0             8m44s
kube-system   kube-scheduler-docker-desktop            1/1     Running      37            8m44s
kube-system   storage-provisioner                      1/1     Running      0             8m38s
kube-system   vpnkit-controller                        1/1     Running      0             8m37s
kuma-demo     demo-app-b4f98898-m587h                  0/2     Init:Error   3 (34s ago)   53s
kuma-demo     redis-8fcbfc795-7hk2k                    0/2     Init:Error   3 (34s ago)   53s
kuma-system   kuma-control-plane-64d55468b-4ghgv       1/1     Running      0             2m31s
sourabh@DESKTOP-FIUUHBV:~/kuma-counter-demo$ kubectl logs demo-app-b4f98898-m587h -n kuma-demo
Defaulted container "demo-app" out of: demo-app, kuma-sidecar, kuma-init (init)
Error from server (BadRequest): container "demo-app" in pod "demo-app-b4f98898-m587h" is waiting to start: PodInitializing
sourabh@DESKTOP-FIUUHBV:~/kuma-counter-demo$ kubectl logs demo-app-b4f98898-m587h kuma-init -n kuma-demo
Flag --skip-resolv-conf has been deprecated, we never change resolveConf so this flag has no effect, you can stop using it
iptables -t nat -D PREROUTING -p tcp -j MESH_INBOUND
iptables -t mangle -D PREROUTING -p tcp -j MESH_INBOUND
iptables -t nat -D OUTPUT -p tcp -j MESH_OUTPUT
iptables -t nat -F MESH_OUTPUT
iptables -t nat -X MESH_OUTPUT
iptables -t nat -F MESH_INBOUND
iptables -t nat -X MESH_INBOUND
iptables -t mangle -F MESH_INBOUND
iptables -t mangle -X MESH_INBOUND
iptables -t mangle -F MESH_DIVERT
iptables -t mangle -X MESH_DIVERT
iptables -t mangle -F MESH_TPROXY
iptables -t mangle -X MESH_TPROXY
iptables -t nat -F MESH_REDIRECT
iptables -t nat -X MESH_REDIRECT
iptables -t nat -F MESH_IN_REDIRECT
iptables -t nat -X MESH_IN_REDIRECT
ip6tables -t nat -D PREROUTING -p tcp -j MESH_INBOUND
ip6tables -t mangle -D PREROUTING -p tcp -j MESH_INBOUND
ip6tables -t nat -D OUTPUT -p tcp -j MESH_OUTPUT
ip6tables -t nat -F MESH_OUTPUT
ip6tables -t nat -X MESH_OUTPUT
ip6tables -t nat -F MESH_INBOUND
ip6tables -t nat -X MESH_INBOUND
ip6tables -t mangle -F MESH_INBOUND
ip6tables -t mangle -X MESH_INBOUND
ip6tables -t mangle -F MESH_DIVERT
ip6tables -t mangle -X MESH_DIVERT
ip6tables -t mangle -F MESH_TPROXY
ip6tables -t mangle -X MESH_TPROXY
ip6tables -t nat -F MESH_REDIRECT
ip6tables -t nat -X MESH_REDIRECT
ip6tables -t nat -F MESH_IN_REDIRECT
ip6tables -t nat -X MESH_IN_REDIRECT
iptables-save
# Generated by iptables-save v1.8.4 on Wed Nov  2 06:49:43 2022
*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# Completed on Wed Nov  2 06:49:43 2022
# Generated by iptables-save v1.8.4 on Wed Nov  2 06:49:43 2022
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Wed Nov  2 06:49:43 2022
# Generated by iptables-save v1.8.4 on Wed Nov  2 06:49:43 2022
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A OUTPUT -p udp -m udp --dport 53 -m owner --uid-owner 5678 -j RETURN
-A OUTPUT -p udp -m udp --dport 53 -m owner --gid-owner 5678 -j RETURN
-A OUTPUT -p udp -m udp --dport 53 -j REDIRECT --to-ports 15053
COMMIT
# Completed on Wed Nov  2 06:49:43 2022
ip6tables-save
# Generated by ip6tables-save v1.8.4 on Wed Nov  2 06:49:43 2022
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Wed Nov  2 06:49:43 2022
# Generated by ip6tables-save v1.8.4 on Wed Nov  2 06:49:43 2022
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Wed Nov  2 06:49:43 2022
kumactl is about to apply the iptables rules that will enable transparent proxying on the machine. The SSH connection may drop. If that happens, just reconnect again.
Environment:
------------
ENVOY_PORT=
INBOUND_CAPTURE_PORT=
INBOUND_CAPTURE_PORT_V6=
ISTIO_INBOUND_INTERCEPTION_MODE=
ISTIO_INBOUND_TPROXY_MARK=
ISTIO_INBOUND_TPROXY_ROUTE_TABLE=
ISTIO_INBOUND_PORTS=
ISTIO_OUTBOUND_PORTS=
ISTIO_LOCAL_EXCLUDE_PORTS=
ISTIO_SERVICE_CIDR=
ISTIO_SERVICE_EXCLUDE_CIDR=
ISTIO_META_DNS_CAPTURE=
SKIP_CONNTRACK_ZONE_SPLIT=

Variables:
----------
PROXY_PORT=15001
PROXY_INBOUND_CAPTURE_PORT=15006
PROXY_INBOUND_CAPTURE_PORT_V6=15010
PROXY_TUNNEL_PORT=15008
PROXY_UID=5678
PROXY_GID=5678
INBOUND_INTERCEPTION_MODE=REDIRECT
INBOUND_TPROXY_MARK=1337
INBOUND_TPROXY_ROUTE_TABLE=133
INBOUND_PORTS_INCLUDE=*
INBOUND_PORTS_EXCLUDE=
OUTBOUND_IP_RANGES_INCLUDE=*
OUTBOUND_IP_RANGES_EXCLUDE=
OUTBOUND_PORTS_INCLUDE=
OUTBOUND_PORTS_EXCLUDE=
KUBEVIRT_INTERFACES=
ENABLE_INBOUND_IPV6=false
DNS_CAPTURE=true
REDIRECT_ALL_DNS_TRAFFIC=true
DNS_SERVERS=[0.0.0.0],[::]
AGENT_DNS_LISTENER_PORT=15053
DNS_UPSTREAM_TARGET_CHAIN=RETURN
SKIP_DNS_CONNTRACK_ZONE_SPLIT=false

Writing following contents to rules file:  /tmp/iptables-rules-1667371783458305854.txt4051367543
* nat
-N MESH_INBOUND
-N MESH_REDIRECT
-N MESH_IN_REDIRECT
-N MESH_OUTPUT
-A MESH_INBOUND -p tcp --dport 15008 -j RETURN
-A MESH_REDIRECT -p tcp -j REDIRECT --to-ports 15001
-A MESH_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A PREROUTING -p tcp -j MESH_INBOUND
-A MESH_INBOUND -p tcp --dport 22 -j RETURN
-A MESH_INBOUND -p tcp -j MESH_IN_REDIRECT
-A OUTPUT -p tcp -j MESH_OUTPUT
-A MESH_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN
-A MESH_OUTPUT -o lo ! -d 127.0.0.1/32 -p tcp ! --dport 53 -m owner --uid-owner 5678 -j MESH_IN_REDIRECT
-A MESH_OUTPUT -o lo -p tcp ! --dport 53 -m owner ! --uid-owner 5678 -j RETURN
-A MESH_OUTPUT -m owner --uid-owner 5678 -j RETURN
-A MESH_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --gid-owner 5678 -j MESH_IN_REDIRECT
-A MESH_OUTPUT -o lo -p tcp ! --dport 53 -m owner ! --gid-owner 5678 -j RETURN
-A MESH_OUTPUT -m owner --gid-owner 5678 -j RETURN
-A MESH_OUTPUT -p tcp --dport 53 -j REDIRECT --to-ports 15053
-A MESH_OUTPUT -d 127.0.0.1/32 -j RETURN
-A MESH_OUTPUT -j MESH_REDIRECT
-I OUTPUT 1 -p udp --dport 53 -m owner --uid-owner 5678 -j RETURN
-I OUTPUT 2 -p udp --dport 53 -m owner --gid-owner 5678 -j RETURN
-I OUTPUT 3 -p udp --dport 53 -j REDIRECT --to-port 15053
COMMIT
* raw
-A OUTPUT -p udp --dport 53 -m owner --uid-owner 5678 -j CT --zone 1
-A OUTPUT -p udp --sport 15053 -m owner --uid-owner 5678 -j CT --zone 2
-A OUTPUT -p udp --dport 53 -m owner --gid-owner 5678 -j CT --zone 1
-A OUTPUT -p udp --sport 15053 -m owner --gid-owner 5678 -j CT --zone 2
-A OUTPUT -p udp --dport 53 -j CT --zone 2
-A PREROUTING -p udp --sport 53 -j CT --zone 1
COMMIT

iptables-restore --noflush /tmp/iptables-rules-1667371783458305854.txt4051367543
iptables-restore v1.8.4 (legacy): unknown option "--zone"
Error occurred at line: 28
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
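
The 'unknown option "--zone"' failure indicates that the iptables CT target with conntrack zones is not usable on this kernel, which matches the WSL issues linked above. A quick probe from the WSL shell (containers share the same kernel); the test rule below is purely hypothetical and is deleted immediately if insertion succeeds:

# Probe whether the kernel supports the iptables CT target with zones.
if sudo iptables -t raw -A OUTPUT -p udp --dport 53 -j CT --zone 1 2>/dev/null; then
    sudo iptables -t raw -D OUTPUT -p udp --dport 53 -j CT --zone 1
    echo "CT --zone supported"
else
    echo "CT --zone NOT supported (matches the kuma-init failure above)"
fi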
watsonb commented 1 year ago

I, too, am running into a problem running the demo app. The problem also appears to be with iptables.

kubectl logs redis-57b8b64dd4-z625l -c kuma-init -n kuma-demo

<snip for brevity>

COMMIT

iptables-restore --noflush /tmp/iptables-rules-1674167628384504598.txt3827216784
iptables-restore v1.8.7 (legacy): iptables-restore: unable to initialize table 'nat'

Error occurred at line: 1
Try `iptables-restore -h' or 'iptables-restore --help' for more information.

Running on a k8s cluster installed via Kubespray onto 6 RHEL8 hosts.

[ansible@dev03 kuma-counter-demo]$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:58:30Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:15:38Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.26) and server (1.24) exceeds the supported minor version skew of +/-1
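
For what it's worth, "unable to initialize table 'nat'" usually means the kernel's NAT table support is unavailable. A quick way to check on a host whether the relevant modules are loaded (a minimal sketch; exact module names can vary by kernel):

# Check whether the NAT table kernel modules are loaded on this host.
lsmod | grep -E 'iptable_nat|nf_nat' || echo "iptable_nat not loaded"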
watsonb commented 1 year ago

I eventually got mine working. First, based on a post I found, I tried temporarily disabling SELinux on all of my k8s cluster hosts:

setenforce 0

That didn't fix it. Then I figured the iptables nat kernel module wasn't loaded. So again, on all k8s cluster hosts:

modprobe iptable_nat

All kuma-demo pods are now running.
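
For reference, to make the fix persist across reboots on systemd-based hosts such as RHEL8, the module can also be registered in /etc/modules-load.d (a minimal sketch, assuming systemd-modules-load is in use):

# Load iptable_nat now and have systemd load it on every boot.
sudo modprobe iptable_nat
echo iptable_nat | sudo tee /etc/modules-load.d/iptable_nat.conf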

bartsmykla commented 1 year ago

@iamsourabh-in have you tried the solution suggested by @watsonb?

bartsmykla commented 1 year ago

I'm closing the issue as there were no updates.