Rahu2000 opened this issue 1 year ago
Hi @Rahu2000, I don't have a WSL machine on hand, so I can't reproduce it on my side. I'm not familiar with WSL either; can you run kind clusters on it? You can try the kind quick start.
In addition, please give more details, especially the output of hack/local-up-karmada.sh.
@RainbowMango Sorry for the delay in getting back to you
hack/local-up-karmada.sh `hostname -I`
+ CGO_ENABLED=0
+ GOOS=linux
+ GOARCH=amd64
+ go build -ldflags '-X github.com/karmada-io/karmada/pkg/version.gitVersion=v1.4.0-257-gd1bad846 -X github.com/karmada-io/karmada/pkg/version.gitCommit=d1bad846a5142a3896de5e7517a8ec6bc0cb8345 -X github.com/karmada-io/karmada/pkg/version.gitTreeState=dirty -X github.com/karmada-io/karmada/pkg/version.buildDate=2023-02-14T01:39:26Z ' -o _output/bin/linux/amd64/karmada-aggregated-apiserver github.com/karmada-io/karmada/cmd/aggregated-apiserver
+ set +x
+ docker build --build-arg BINARY=karmada-aggregated-apiserver --tag docker.io/karmada/karmada-aggregated-apiserver:latest --file hack/../cluster/images/Dockerfile hack/../_output/bin/linux/amd64
[+] Building 107.3s (9/9) FINISHED
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 145B 0.1s
=> [internal] load .dockerignore 0.2s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.17.1 102.9s
=> [auth] library/alpine:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.9s
=> => transferring context: 84.77MB 0.8s
=> [1/3] FROM docker.io/library/alpine:3.17.1@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a 1.2s
=> => resolve docker.io/library/alpine:3.17.1@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a 0.0s
=> => sha256:93d5a28ff72d288d69b5997b8ba47396d2cbb62a72b5d87cd3351094b5d578a0 528B / 528B 0.0s
=> => sha256:042a816809aac8d0f7d7cacac7965782ee2ecac3f21bcf9f24b1de1a7387b769 1.47kB / 1.47kB 0.0s
=> => sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9 3.37MB / 3.37MB 1.0s
=> => sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a 1.64kB / 1.64kB 0.0s
=> => extracting sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9 0.2s
=> [2/3] RUN apk add --no-cache ca-certificates 2.1s
=> [3/3] COPY karmada-aggregated-apiserver /bin/karmada-aggregated-apiserver 0.3s
=> exporting to image 0.3s
=> => exporting layers 0.3s
=> => writing image sha256:80f0eb82cb4b774c57679219ae87640d2bf92bd976a789cdc65bf75c2fbd619b 0.0s
=> => naming to docker.io/karmada/karmada-aggregated-apiserver:latest 0.0s
...
Preparing: 'kind' existence check - passed
Preparing: 'kubectl' existence check - passed
Preparing kindClusterConfig in path: /tmp/tmp.phdX8hiACf
Creating cluster karmada-host and the log file is in /tmp/karmada/karmada-host.log
Creating cluster member1 and the log file is in /tmp/karmada/member1.log
Creating cluster member2 and the log file is in /tmp/karmada/member2.log
Creating cluster member3 and the log file is in /tmp/karmada/member3.log
make: Entering directory '/mnt/c/Workspace/karmada'
...
set -e;\
target=$(echo karmada-operator);\
make $target GOOS=linux;\
VERSION=latest REGISTRY=docker.io/karmada BUILD_PLATFORMS=linux/amd64 hack/docker.sh $target
make[1]: Entering directory '/mnt/c/Workspace/karmada'
BUILD_PLATFORMS=linux/amd64 hack/build.sh karmada-operator
!!! Building karmada-operator for linux/amd64:
make[1]: Leaving directory '/mnt/c/Workspace/karmada'
Building image for linux/amd64: docker.io/karmada/karmada-operator:latest
make: Leaving directory '/mnt/c/Workspace/karmada'
Waiting for the host clusters to be ready...
Waiting for kubeconfig file /home/mzc/.kube/karmada.config and clusters karmada-host to be ready...
Context "kind-karmada-host" renamed to "karmada-host".
Cluster "kind-karmada-host" set.
Waiting for ok.......................................... ### <== waiting cluster
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: <...>
server: https://172.18.0.5:6443
name: kind-karmada-host
contexts:
- context:
cluster: kind-karmada-host
user: kind-karmada-host
name: karmada-host
current-context: karmada-host
kind: Config
preferences: {}
users:
- name: kind-karmada-host
user:
client-certificate-data: <...>
client-key-data: <...>
kind: Cluster
apiVersion: "kind.x-k8s.io/v1alpha4"
networking:
apiServerAddress: "172.17.132.208"
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 5443
hostPort: 5443
protocol: TCP
listenAddress: "172.17.132.208"
kind: Cluster
apiVersion: "kind.x-k8s.io/v1alpha4"
networking:
apiServerAddress: "172.17.132.208"
podSubnet: "10.10.0.0/16"
serviceSubnet: "10.11.0.0/16"
nodes:
- role: control-plane
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c833b54e7da kindest/node:v1.26.0 "/usr/local/bin/entr…" About an hour ago Up About an hour 172.17.132.208:5443->5443/tcp, 172.17.132.208:37581->6443/tcp karmada-host-control-plane
e635ab3e90a6 kindest/node:v1.26.0 "/usr/local/bin/entr…" About an hour ago Up About an hour 172.17.132.208:35945->6443/tcp member2-control-plane
34d3ecfbd393 kindest/node:v1.26.0 "/usr/local/bin/entr…" About an hour ago Up About an hour 127.0.0.1:34639->6443/tcp member3-control-plane
1a936aab237c kindest/node:v1.26.0 "/usr/local/bin/entr…" About an hour ago Up About an hour 172.17.132.208:33489->6443/tcp member1-control-plane
[
{
"Name": "kind",
"Id": "e16befca91afc2553f2cffb046911647d9a090125cc1538b4f02c509232fce38",
"Created": "2022-08-03T09:17:50.8251538Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": true,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
},
{
"Subnet": "fc00:f853:ccd:e793::/64",
"Gateway": "fc00:f853:ccd:e793::1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"0c833b54e7daf5734ad172c5df43a439ec7233602e95ed01b6657ddbe27bfb9c": {
"Name": "karmada-host-control-plane",
"EndpointID": "cee272750d6fe4780519e98b6d5c02ba4af2639ff19aaae6a2c347c1ec293496",
"MacAddress": "02:42:ac:12:00:05",
"IPv4Address": "172.18.0.5/16",
"IPv6Address": "fc00:f853:ccd:e793::5/64"
},
"1a936aab237c7edf617dfe5471812a98e4a7801eb494ea9cd393264a9bff82a2": {
"Name": "member1-control-plane",
"EndpointID": "1d5b98597be4c53987dd5e3dd61b16faae103d3d37cfd9bbc7a75b0590921e7d",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": "fc00:f853:ccd:e793::2/64"
},
"34d3ecfbd39370ad134f7b69670e3509a58c1eb7c6c4a8833bc26041d7180401": {
"Name": "member3-control-plane",
"EndpointID": "500f2ca9e0134635eda2faa65d531e0a6412ddf4931350fd8182043b09300eb5",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": "fc00:f853:ccd:e793::3/64"
},
"e635ab3e90a6f6d58da588507dd42c2e773dc5eee653d4f5f8ac97b3bb1248d0": {
"Name": "member2-control-plane",
"EndpointID": "6f8db76ccd499eb3b4e2304d9265654eec5f3b84733f8e0e8d38313e8c2670cd",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": "fc00:f853:ccd:e793::4/64"
}
},
"Options": {
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Waiting for ok.......................................... ### <== waiting cluster
Do you mean the script hangs there forever?
From the output of docker ps, I can see the kind clusters are running and up, but please paste the contents of any one of the following log files here.
Creating cluster karmada-host and the log file is in /tmp/karmada/karmada-host.log
Creating cluster member1 and the log file is in /tmp/karmada/member1.log
Creating cluster member2 and the log file is in /tmp/karmada/member2.log
Creating cluster member3 and the log file is in /tmp/karmada/member3.log
@RainbowMango Hi!
Waiting for ok.......................................... ### <== waiting cluster
Do you mean the script hangs there forever?
yes
From the output of docker ps, I can see the kind clusters are running and up, but please paste the contents of any one of the following log files here.
Creating cluster karmada-host and the log file is in /tmp/karmada/karmada-host.log
Creating cluster member1 and the log file is in /tmp/karmada/member1.log
Creating cluster member2 and the log file is in /tmp/karmada/member2.log
Creating cluster member3 and the log file is in /tmp/karmada/member3.log
Deleting cluster "karmada-host" ...
Creating cluster "karmada-host" ...
• Ensuring node image (kindest/node:v1.26.0) 🖼 ...
✓ Ensuring node image (kindest/node:v1.26.0) 🖼
• Preparing nodes 📦 ...
✓ Preparing nodes 📦
• Writing configuration 📜 ...
✓ Writing configuration 📜
• Starting control-plane 🕹️ ...
✓ Starting control-plane 🕹️
• Installing CNI 🔌 ...
✓ Installing CNI 🔌
• Installing StorageClass 💾 ...
✓ Installing StorageClass 💾
Set kubectl context to "kind-karmada-host"
You can now use your cluster with:
kubectl cluster-info --context kind-karmada-host --kubeconfig /home/mzc/.kube/karmada.config
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
Deleting cluster "member1" ...
Creating cluster "member1" ...
• Ensuring node image (kindest/node:v1.26.0) 🖼 ...
✓ Ensuring node image (kindest/node:v1.26.0) 🖼
• Preparing nodes 📦 ...
✓ Preparing nodes 📦
• Writing configuration 📜 ...
✓ Writing configuration 📜
• Starting control-plane 🕹️ ...
✓ Starting control-plane 🕹️
• Installing CNI 🔌 ...
✓ Installing CNI 🔌
• Installing StorageClass 💾 ...
✓ Installing StorageClass 💾
Set kubectl context to "kind-member1"
You can now use your cluster with:
kubectl cluster-info --context kind-member1 --kubeconfig /home/mzc/.kube/members.config
Deleting cluster "member2" ...
Creating cluster "member2" ...
• Ensuring node image (kindest/node:v1.26.0) 🖼 ...
✓ Ensuring node image (kindest/node:v1.26.0) 🖼
• Preparing nodes 📦 ...
✓ Preparing nodes 📦
• Writing configuration 📜 ...
✓ Writing configuration 📜
• Starting control-plane 🕹️ ...
✓ Starting control-plane 🕹️
• Installing CNI 🔌 ...
✓ Installing CNI 🔌
• Installing StorageClass 💾 ...
✓ Installing StorageClass 💾
Set kubectl context to "kind-member2"
You can now use your cluster with:
kubectl cluster-info --context kind-member2 --kubeconfig /home/mzc/.kube/members.config
Deleting cluster "member3" ...
Creating cluster "member3" ...
• Ensuring node image (kindest/node:v1.26.0) 🖼 ...
✓ Ensuring node image (kindest/node:v1.26.0) 🖼
• Preparing nodes 📦 ...
✓ Preparing nodes 📦
• Writing configuration 📜 ...
✓ Writing configuration 📜
• Starting control-plane 🕹️ ...
✓ Starting control-plane 🕹️
• Installing CNI 🔌 ...
✓ Installing CNI 🔌
• Installing StorageClass 💾 ...
✓ Installing StorageClass 💾
Set kubectl context to "kind-member3"
You can now use your cluster with:
kubectl cluster-info --context kind-member3 --kubeconfig /home/mzc/.kube/members.config
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
WSL2's GOOS is 'linux', but the Docker network in WSL2 behaves like macOS's. util::check_clusters_ready() overrides the kubeconfig's $HOST_IP.
Yeah, I think you've found the root cause. The relevant code is: https://github.com/karmada-io/karmada/blob/a9089325dcbec52e30b00f71ac402ccb6fba29fb/hack/util.sh#L459-L464
Since $os_name is linux, the linux branch is taken, but on WSL2 the Docker IP needs to be obtained the way the darwin branch does.
I don't know how to support WSL2 yet. Do you have any ideas?
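One possible direction (a sketch only, not upstream code): detect a WSL kernel and route it to the darwin-style host IP:port lookup. The helper name is_wsl_kernel and the argument form are hypothetical; in practice the string would come from /proc/version or uname -r.

```shell
#!/bin/sh
# Hypothetical sketch: WSL kernels identify themselves with "microsoft" in
# the kernel version string, so that can distinguish WSL2 from plain Linux.
is_wsl_kernel() {
  echo "$1" | grep -qi 'microsoft'
}

# Example decision, mirroring the case statement in hack/util.sh:
if is_wsl_kernel "$(uname -r)"; then
  echo "WSL detected: use util::get_docker_host_ip_port"
else
  echo "plain Linux: use util::get_docker_native_ipaddress"
fi
```

This keeps GOOS=linux untouched and only branches on the runtime kernel, so macOS and native Linux behavior would be unchanged.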
WSL2's GOOS is 'linux', but the Docker network in WSL2 behaves like macOS's. util::check_clusters_ready() overrides the kubeconfig's $HOST_IP.

Yeah, I think you've found the root cause. The relevant code is:
Since $os_name is linux, the linux branch is taken, but on WSL2 the Docker IP needs to be obtained the way the darwin branch does. I don't know how to support WSL2 yet. Do you have any ideas?
I modified util.sh, re-ran the script, and it works:
case $os_name in
    # linux) container_ip_port=$(util::get_docker_native_ipaddress "${context_name}-control-plane")":6443"
    # ;;
    linux) container_ip_port=$(util::get_docker_host_ip_port "${context_name}-control-plane")
        ;;
    darwin) container_ip_port=$(util::get_docker_host_ip_port "${context_name}-control-plane")
        ;;
    *)
hack/local-up-karmada.sh `hostname -I`
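The endpoint difference is visible in the docker ps output earlier in the thread: the commented-out linux branch would use the container's kind-network IP (e.g. 172.18.0.5:6443), while the host-ip-port helper yields the host-published mapping. A standalone sketch (the helper name host_endpoint_for_6443 is hypothetical) that extracts that mapping from a docker ps PORTS value:

```shell
#!/bin/sh
# Hypothetical helper: given a `docker ps` PORTS column value, return the
# host-side endpoint that Docker publishes for the API server port 6443.
host_endpoint_for_6443() {
  echo "$1" | tr ',' '\n' | grep -- '->6443/tcp' | sed 's|->6443/tcp||' | tr -d ' '
}

# The karmada-host-control-plane PORTS value from the docker ps output above:
host_endpoint_for_6443 "172.17.132.208:5443->5443/tcp, 172.17.132.208:37581->6443/tcp"
# -> 172.17.132.208:37581
```

That 172.17.132.208:37581 endpoint is reachable from the WSL2 shell, whereas 172.18.0.5:6443 is only routable inside the Docker host, which is why the readiness check hangs.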
Is it correct to overwrite karmada.config when GOOS=linux and HOST_IPADDRESS is provided?
I'm not sure this patch could be accepted upstream, but I'm glad that it works for you.
Is it correct to overwrite karmada.config when GOOS=linux and HOST_IPADDRESS is provided?
I don't know, but since it helps you set up an environment to try out Karmada, from that point of view it's great.
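For context, the override in question amounts to rewriting the server field of the generated kubeconfig to point at the chosen endpoint. A minimal standalone sketch (rewrite_server is a hypothetical name, not the helper in hack/util.sh):

```shell
#!/bin/sh
# Hypothetical sketch of what overriding the kubeconfig endpoint amounts to:
# replace the address after `server:` with the chosen host endpoint,
# preserving the line's indentation.
rewrite_server() {
  sed "s|server: https://.*|server: https://$1|"
}

# Example: the container IP from the kubeconfig above, rewritten to the
# HOST_IPADDRESS:port that is reachable from WSL2.
printf '    server: https://172.18.0.5:6443\n' | rewrite_server "172.17.132.208:5443"
# -> '    server: https://172.17.132.208:5443'
```

Whether overwriting is "correct" then reduces to whether the new endpoint is the one actually reachable from where kubectl runs.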
After changing my Docker networking options, it works.
## Change wsl.conf configuration
# Prevent automatic generation of resolv.conf
echo -e "[network]\ngenerateResolvConf = false" | sudo tee -a /etc/wsl.conf
sudo unlink /etc/resolv.conf
# Add a DNS nameserver
echo nameserver 1.1.1.1 | sudo tee /etc/resolv.conf

## Change Docker network configuration
sudo update-alternatives --config iptables
# ...and select 'iptables-legacy'

# Finally, run hack/local-up-karmada.sh
What happened: When deploying the built Karmada images to the cluster, the deployment fails due to a connection timeout.
What you expected to happen: Karmada deploys successfully in a WSL2 + Docker local environment.
How to reproduce it (as minimally and precisely as possible): in a WSL2 terminal, run
hack/local-up-karmada.sh
or hack/local-up-karmada.sh `hostname -I`
Anything else we need to know?:
util::check_clusters_ready() overrides the kubeconfig's $HOST_IP

Environment:
- kubectl-karmada version or karmadactl version:
- WSL2 version:
- Docker Desktop version: