kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0
13.1k stars · 1.51k forks

could not find a line that matches "Reached target .*Multi-User System.*" #2460

Closed · ryanjbaxter closed this issue 2 years ago

ryanjbaxter commented 2 years ago

We started receiving this error recently when creating a Kind cluster on CircleCI.

https://app.circleci.com/pipelines/github/spring-cloud/spring-cloud-kubernetes/1008/workflows/c6d1d226-2028-41ed-a5d8-d0c912f3e0d1/jobs/2090?invite=true#step-108-220

I am pretty sure it has to do with this change

https://github.com/kubernetes-sigs/kind/pull/2421

Any ideas what we can do to make Kind work on CircleCI?

For now, I have reverted to using the v0.11.1 release.

AkihiroSuda commented 2 years ago

https://app.circleci.com/pipelines/github/spring-cloud/spring-cloud-kubernetes/1008/workflows/c6d1d226-2028-41ed-a5d8-d0c912f3e0d1/jobs/2090?invite=true#step-108-220

Can't see the logs.

Could you post the full logs of docker logs kind-control-plane?

ryanjbaxter commented 2 years ago

Sorry about that.

Here are the logs from the build on CircleCI

+ main
+ install_latest_kind
+ local tmp_dir
++ TMPDIR=/tmp/tmp.4HBinlkgtF
++ mktemp -d /tmp/tmp.4HBinlkgtF/kind-source.XXXXX
+ tmp_dir=/tmp/tmp.4HBinlkgtF/kind-source.Qm5mm
+ cd /tmp/tmp.4HBinlkgtF/kind-source.Qm5mm
+ git clone https://github.com/kubernetes-sigs/kind
Cloning into 'kind'...
Warning: Permanently added the RSA host key for IP address '140.82.113.4' to the list of known hosts.
remote: Enumerating objects: 23489, done.
remote: Counting objects: 100% (635/635), done.
remote: Compressing objects: 100% (329/329), done.
remote: Total 23489 (delta 312), reused 530 (delta 255), pack-reused 22854
Receiving objects: 100% (23489/23489), 11.69 MiB | 17.30 MiB/s, done.
Resolving deltas: 100% (13297/13297), done.
+ cd ./kind
+ make install INSTALL_DIR=/tmp/tmp.4HBinlkgtF
go build -v -o "/tmp/tmp.4HBinlkgtF/kind-source.Qm5mm/kind/bin/kind" -trimpath -ldflags="-buildid= -w -X=sigs.k8s.io/kind/pkg/cmd/kind/version.GitCommit=4910c3e221a858e68e29f9494170a38e1c4e8b80"
install -d /tmp/tmp.4HBinlkgtF
install "/tmp/tmp.4HBinlkgtF/kind-source.Qm5mm/kind/bin/kind" "/tmp/tmp.4HBinlkgtF/kind"
+ cd /home/circleci/project/spring-cloud-kubernetes-integration-tests
+ /tmp/tmp.4HBinlkgtF/kind create cluster --config=kind-config.yaml -v=2147483647
Creating cluster "kind" ...
DEBUG: docker/images.go:58] Image: kindest/node:v1.22.10@sha256:7f539328bebb0483e4a91ae48fcd067619cd9fa28f4bf3b3d624858e30571a8e present locally
 βœ“ Ensuring node image (kindest/node:v1.22.10) πŸ–Ό
 βœ— Preparing nodes πŸ“¦ πŸ“¦
ERROR: failed to create cluster: could not find a line that matches "Reached target .*Multi-User System.*"
Stack Trace:
sigs.k8s.io/kind/pkg/errors.Wrap
    sigs.k8s.io/kind/pkg/errors/errors.go:47
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.runE
    sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:90
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.NewCommand.func1
    sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:55
github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra@v1.1.3/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra@v1.1.3/command.go:960
github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra@v1.1.3/command.go:897
sigs.k8s.io/kind/cmd/kind/app.Run
    sigs.k8s.io/kind/cmd/kind/app/main.go:53
sigs.k8s.io/kind/cmd/kind/app.Main
    sigs.k8s.io/kind/cmd/kind/app/main.go:35
main.main
    sigs.k8s.io/kind/main.go:25
runtime.main
    runtime/proc.go:255
runtime.goexit
    runtime/asm_amd64.s:1581
+ cleanup
+ /tmp/tmp.4HBinlkgtF/kind delete cluster
Deleting cluster "kind" ...
+ rm -rf /tmp/tmp.4HBinlkgtF

docker logs kind-control-plane fails to produce anything because the Kind cluster fails to start.

AkihiroSuda commented 2 years ago

docker logs kind-control-plane fails to produce anything because the Kind cluster fails to start.

kind create cluster --retain retains the container (and its logs)
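For later readers, the debug loop being suggested here can be sketched in a few commands (the Docker-dependent lines are commented out since they need a running daemon; the `./kind-logs` directory name is just an example):

```shell
# Sketch of the suggested debug loop. --retain keeps the failed node
# container around so `docker logs` and `kind export logs` still have
# something to read.
log_dir=./kind-logs
# kind create cluster --retain      # node container survives the failure
# docker logs kind-control-plane    # raw container/systemd output
# kind export logs "$log_dir"       # full log bundle for bug reports
# kind delete cluster               # clean up when done
echo "logs would be collected under $log_dir"
```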

ryanjbaxter commented 2 years ago

Not really much there

$ docker logs kind-control-plane
standard_init_linux.go:185: exec user process caused "exec format error"
standard_init_linux.go:185: exec user process caused "exec format error"
BenTheElder commented 2 years ago

Usually when I see "exec format error", it means the architecture is wrong.

DEBUG: docker/images.go:58] Image: kindest/node:v1.22.10@sha256:7f539328bebb0483e4a91ae48fcd067619cd9fa28f4bf3b3d624858e30571a8e present locally
 βœ“ Ensuring node image (kindest/node:v1.22.10) πŸ–Ό

This is curious, that's not a real version?
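Since "exec format error" almost always means the image was built for a different CPU architecture than the host, a quick preflight check can be sketched like this (the helper name is ours, not part of kind, and the `docker image inspect` call assumes a local daemon):

```shell
# Map `uname -m` names onto Docker's architecture names so the two
# can be compared directly.
normalize_arch() {
  case "$1" in
    x86_64)  echo amd64 ;;
    aarch64) echo arm64 ;;
    *)       echo "$1" ;;
  esac
}

# Compare an image's architecture with the host's (needs a Docker daemon):
# image_arch=$(docker image inspect --format '{{.Architecture}}' kindest/node:v1.22.0)
# [ "$image_arch" = "$(normalize_arch "$(uname -m)")" ] || echo "architecture mismatch"
```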

ryanjbaxter commented 2 years ago

Hmm, not sure how we got that version, but I switched it to kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047 and tried again. It still failed, but with more details.

Creating cluster "kind" ...
DEBUG: docker/images.go:67] Pulling image: kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047 ...
 βœ“ Ensuring node image (kindest/node:v1.22.0) πŸ–Ό
 βœ— Preparing nodes πŸ“¦ πŸ“¦
ERROR: failed to create cluster: could not find a line that matches "Reached target .*Multi-User System.*"
Stack Trace:
sigs.k8s.io/kind/pkg/errors.Wrap
    sigs.k8s.io/kind/pkg/errors/errors.go:47
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.runE
    sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:90
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.NewCommand.func1
    sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:55
github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra@v1.1.3/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra@v1.1.3/command.go:960
github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra@v1.1.3/command.go:897
sigs.k8s.io/kind/cmd/kind/app.Run
    sigs.k8s.io/kind/cmd/kind/app/main.go:53
sigs.k8s.io/kind/cmd/kind/app.Main
    sigs.k8s.io/kind/cmd/kind/app/main.go:35
main.main
    sigs.k8s.io/kind/main.go:25
runtime.main
    runtime/proc.go:255
runtime.goexit
    runtime/asm_amd64.s:1581
circleci@default-185ee2f2-6911-4e9a-8158-319dcf5eff1d:~/project/spring-cloud-kubernetes-integration-tests$ nano kind-config.yaml
circleci@default-185ee2f2-6911-4e9a-8158-319dcf5eff1d:~/project/spring-cloud-kubernetes-integration-tests$ docker logs kind-control-plane
INFO: ensuring we can execute mount/umount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: detected cgroup v1
INFO: fix cgroup mounts for all subsystems
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: faking /sys/class/dmi/id/product_uuid to be random
INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
INFO: setting iptables to detected mode: legacy
INFO: Detected IPv4 address: 172.18.0.2
INFO: Detected IPv6 address: ::ffff:172.18.0.2
aojea commented 2 years ago

maybe 30 seconds is too short for these environments? they tend to be very constrained

https://github.com/kubernetes-sigs/kind/blob/4910c3e221a858e68e29f9494170a38e1c4e8b80/pkg/cluster/internal/providers/common/provision.go#L81
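For context, the linked code waits for systemd inside the node container to log that it reached its boot target. A rough shell equivalent of that wait (kind's real implementation is Go; the 30-second figure is the timeout being discussed):

```shell
# Poll a log file until a line matches the pattern, or give up after
# the timeout. This is only an illustration of the check, not kind's code.
wait_for_line() {  # wait_for_line <logfile> <extended-regex> <timeout-seconds>
  elapsed=0
  while [ "$elapsed" -lt "$3" ]; do
    grep -qE "$2" "$1" && return 0
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1
}

# In kind's case the log source is `docker logs <node-container>` and the
# pattern is the one from the error message:
# wait_for_line /tmp/node.log 'Reached target .*Multi-User System.*' 30
```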

AkihiroSuda commented 2 years ago

https://github.com/kubernetes-sigs/kind/issues/2460#issuecomment-921020265

β€œexec format error” implies that the timeout is probably unrelated.

ryanjbaxter commented 2 years ago

So is https://github.com/kubernetes-sigs/kind/pull/2421 not related to this problem?

BenTheElder commented 2 years ago

Seems unlikely, exec format error suggests for example that the CPU architecture is wrong.

Can you please share more about the environment?

ryanjbaxter commented 2 years ago

@BenTheElder this repo should allow you to reproduce the problem on CircleCi https://github.com/ryanjbaxter/kind-2460

If you fork it and set it up to be built by CircleCi you can reproduce the problem

BenTheElder commented 2 years ago

Thanks, we actually have a repo for this: https://kind.sigs.k8s.io/docs/user/resources/#using-kind-in-ci

https://github.com/kind-ci/examples

To be quite honest, I don't think I'm likely to have time to play with this soon. I do highly recommend sticking to stable releases of kind, FWIW, unless you are developing Kubernetes itself.

Most of our open source users are either using Kubernetes's own Kubernetes-based CI (Prow) or GitHub Actions, both of which we do test in this repo and try to keep functional. But I can't justify focusing on more at work (I'm not even currently working on kind, just maintaining it), and in my own time I'm mostly not touching work-related things at the moment, for sanity πŸ™ƒ

Can you confirm that v0.11.1 is fine with the image we've pre-built and published for v0.11.1? https://github.com/kubernetes-sigs/kind/releases/tag/v0.11.1 kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047

wind57 commented 2 years ago

yeah, I can confirm that v0.11.1 is working just fine, and that is how Ryan and I fixed it. I guess we will stick to stable releases from now on.

BenTheElder commented 2 years ago

cc @aojea if you have time to look prior to any upcoming release πŸ™

aojea commented 2 years ago

/assign

aojea commented 2 years ago

This is interesting. I have a job that runs nightly in CircleCI; it builds nightly kind images and creates a cluster as a smoke test. There are no issues with that job https://app.circleci.com/pipelines/github/aojea/kind-images?branch=master πŸ€”

aojea commented 2 years ago

reproduced πŸ˜„ https://app.circleci.com/pipelines/github/aojea/kind-2460/1/workflows/1af217f9-365d-43b1-a9a6-61a07fe2603b/jobs/1

aojea commented 2 years ago

ok, here is the problem. In CircleCI, with this docker version:

docker version
Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:38 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce

there are no systemd logs in the docker logs output

docker logs 7a8496a1fd5d
INFO: ensuring we can execute mount/umount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: detected cgroup v1
INFO: fix cgroup mounts for all subsystems
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: faking /sys/class/dmi/id/product_uuid to be random
INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
INFO: setting iptables to detected mode: legacy
INFO: Detected IPv4 address: 172.18.0.2
INFO: Detected IPv6 address: ::ffff:172.18.0.2
circleci@default-696c1dec-acf2-4f99-b6bf-10bc08e9416b:~/project/kind$

on my laptop with docker 20.10.8:

docker logs d5044cc1a35a
INFO: ensuring we can execute mount/umount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: detected cgroup v1
INFO: fix cgroup mounts for all subsystems
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: faking /sys/class/dmi/id/product_uuid to be random
INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
INFO: setting iptables to detected mode: nft
INFO: Detected IPv4 address: 172.18.0.4
INFO: Detected IPv6 address: fc00:f853:ccd:e793::4
systemd 248.3-1ubuntu3 running in system mode. (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS -OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP -LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=hybrid)
Detected virtualization docker.
Detected architecture x86-64.

Welcome to Ubuntu Impish Indri (development branch)!

Queued start job for default target Graphical Interface.
[  OK  ] Created slice system-modprobe.slice.
[  OK  ] Started Dispatch Password …ts to Console Directory Watch.
[  OK  ] Set up automount Arbitrary…s File System Automount Point.
[  OK  ] Reached target Local Encrypted Volumes.
[  OK  ] Reached target Paths.
[  OK  ] Reached target Slices.
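The contrast above (no systemd output under Docker 17.09, a full boot log under 20.10) suggests the Docker server version matters here. A hedged preflight sketch; `version_ge` is our helper, not something kind ships, and the 20.10 threshold is only an assumption based on the two versions compared above:

```shell
# True if dotted version A is >= version B (uses GNU `sort -V`).
version_ge() {  # version_ge A B
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# server_ver=$(docker version --format '{{.Server.Version}}')
# version_ge "$server_ver" 20.10.0 || echo "Docker $server_ver may be too old"
```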
aojea commented 2 years ago

why are the journald logs not showing up in docker logs? maybe these entries are related:

Sep 29 07:45:08 kind-control-plane systemd[1]: systemd-journal-flush.service: Failed to set 'blkio.weight' attribute on '/docker/7a8496a1fd5d0c7e69d71a695abb3b2707a79e2dd59537a52b7d38fa2d3b288e/system.slice/systemd-journal-flush.service' to 'default 500': Invalid argument
Sep 29 07:45:08 kind-control-plane systemd-sysctl[179]: Couldn't write '1' to 'fs/protected_fifos', ignoring: No such file or directory
Sep 29 07:45:08 kind-control-plane systemd-sysctl[179]: Couldn't write '2' to 'fs/protected_regular', ignoring: No such file or directory

@AkihiroSuda does this ring a bell?

aojea commented 2 years ago

I suspect the problem is with systemd's logging. CircleCI has:

root@kind-control-plane:/# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.4.0-96-generic root=UUID=bed15000-f4d1-4cc1-aa58-dd468539785f ro scsi_mod.use_blk_mq=Y console=ttyS0

and the container inherits the server config
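To see what is being compared here: the node container shares the host kernel, so its view of the boot parameters is the CI machine's. A minimal way to check (the `docker exec` line assumes a retained node container named kind-control-plane):

```shell
# The kernel command line the host (and thus every container) booted with.
cat /proc/cmdline

# Same check from inside a retained kind node container:
# docker exec kind-control-plane cat /proc/cmdline
```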

aojea commented 2 years ago

any systemd experts around: can we always log the boot checks to the docker logs? or should we just not fail hard on the check?

aojea commented 2 years ago

I got a fix :)

aojea commented 2 years ago

@BenTheElder this repo should allow you to reproduce the problem on CircleCi https://github.com/ryanjbaxter/kind-2460

If you fork it and set it up to be built by CircleCi you can reproduce the problem

the job works now from master https://app.circleci.com/pipelines/github/aojea/kind-2460/2/workflows/736bf7b4-f454-48c9-951e-6ee6291e9f98/jobs/4

ryanjbaxter commented 2 years ago

Thanks @aojea, my simple test that reproduced the problem now works!

wind57 commented 2 years ago

NICE! thx a lot!

BenTheElder commented 2 years ago

Thanks @aojea <3

chandrareddyp commented 1 year ago

Does anyone know how to go back from kind version 0.14.0 to kind version 0.11.1?

BenTheElder commented 1 year ago

@chandrareddyp it depends on how you installed kind, but basically you'll want to delete your clusters and install the new version.

For many of the options here you can use the same instructions but replace the version number https://kind.sigs.k8s.io/docs/user/quick-start/#installation
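Concretely, the quick-start's download step with the version number swapped looks like this (v0.11.1 here only because it is the release discussed above; the download itself is commented out since it needs network access):

```shell
# Pin a specific kind release by substituting the version into the
# quick-start install commands (Linux amd64 shown).
KIND_VERSION=v0.11.1
# curl -Lo ./kind "https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64"
# chmod +x ./kind
# sudo mv ./kind /usr/local/bin/kind
echo "would install kind ${KIND_VERSION}"
```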

To discuss this tangent further, please file a new support issue with more details, or try our slack https://github.com/kubernetes-sigs/kind#community

Alternatively, if you could share more about your environment in which you are hitting issues with v0.14.0, that would be helpful for us, this particular issue should not be happening on v0.14.0

sailinnthu commented 1 year ago

I'm still getting this error while trying to create more than one kind cluster on Ubuntu 22.04. I can only create the first cluster successfully.

$ kind create cluster --config mgmt-kindconfig-v124.yaml
Creating cluster "gm-mgmt" ...
 βœ“ Ensuring node image (kindest/node:v1.24.0) πŸ–Ό
 βœ— Preparing nodes πŸ“¦ πŸ“¦ πŸ“¦ πŸ“¦
ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"

The environment:

$ kind version
kind v0.14.0 go1.18.2 linux/amd64

$ docker version | grep Version
Version: 20.10.17
Version: 20.10.17
Version: 1.6.7
Version: 1.1.3
Version: 0.19.0

$ uname -a
Linux system76 5.17.15-76051715-generic #202206141358~1655919116~22.04~1db9e34~dev-Ubuntu SMP PREEMPT Fr x86_64 x86_64 x86_64 GNU/Linux

BenTheElder commented 1 year ago

https://github.com/kubernetes-sigs/kind/pull/2478#issuecomment-1214656908

dhaiducek commented 1 year ago

@sailinnthu For me, when I specify an image version directly (even the same image it says it's using) like --image=kindest/node:v1.24.0, everything works fine. As soon as I run kind create cluster (or specify the image SHA associated with the v0.14.0 Kind version), the message reappears. Hopefully that might help in your case!

Alevsk commented 1 year ago

@dhaiducek the only solution for me is to restart my machine and then try to create the cluster again :/ I'm able to reproduce this issue by first creating a cluster, then deleting it, and then trying to create it again.

aojea commented 1 year ago

@dhaiducek the only solution for me is to restart my machine and then try to create the cluster again :/ I'm able to reproduce this issue by first creating a cluster, then deleting it, and then trying to create it again.

do you reproduce it every time? what are your setup details, @Alevsk?

Andrioden commented 1 year ago

Also seeing this problem after deleting the cluster. Windows, WSL, Docker Desktop.

Restarting my machine solves it.

PS C:\Projects\NHN\HN-Oppsett> kind create cluster --config kubernetes/cluster/kind-config.yaml
Creating cluster "kind" ...
 β€’ Ensuring node image (kindest/node:v1.25.0) πŸ–Ό  ...

Command Output: Error response from daemon: i/o timeout

-------------------

PS C:\Projects\NHN\HN-Oppsett> kind create cluster --config kubernetes/cluster/kind-config.yaml
Creating cluster "kind" ...
 β€’ Ensuring node image (kindest/node:v1.25.0) πŸ–Ό  ...
 βœ“ Ensuring node image (kindest/node:v1.25.0) πŸ–Ό
 β€’ Preparing nodes πŸ“¦   ...
 βœ— Preparing nodes πŸ“¦
ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"

------------------- 

PS C:\Projects\NHN\HN-Oppsett> kind create cluster --config kubernetes/cluster/kind-config.yaml
Creating cluster "kind" ...
 β€’ Ensuring node image (kindest/node:v1.25.0) πŸ–Ό  ...
 βœ“ Ensuring node image (kindest/node:v1.25.0) πŸ–Ό
 β€’ Preparing nodes πŸ“¦   ...
 βœ— Preparing nodes πŸ“¦
ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"

... repeating
Aryan-Deshpande commented 1 year ago

I noticed the same result, @Andrioden, except that after restarting I got the same error, and surprisingly I have 3 containers.

After I checked for existing clusters on the command line using "kind get clusters", no clusters were found, while 3 containers are already running. [screenshot]

Aryan-Deshpande commented 1 year ago

here is my config.yaml used when creating those clusters: [screenshot]

BenTheElder commented 1 year ago

Again. https://github.com/kubernetes-sigs/kind/pull/2478#issuecomment-1214656908

This is a symptom, not a root cause. Please file a detailed bug report so we can follow-up. The root cause for this issue was already resolved.

dxps commented 1 year ago

@BenTheElder Sure, but unfortunately I don't get any further details:

❯ kind create cluster -n istioinaction --config istioinaction_cluster_config --retain; kind export logs; kind delete cluster
Creating cluster "istioinaction" ...
 βœ“ Ensuring node image (kindest/node:v1.25.2) πŸ–Ό
 βœ— Preparing nodes πŸ“¦  
ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"
ERROR: unknown cluster "kind"
Deleting cluster "kind" ...
❯ 

The cluster definition seems to be created:

❯ kind get clusters 
dxps-cluster
istioinaction
❯

but without any nodes in it (the nodes that exist belong to the other cluster):

❯ k get nodes
NAME                         STATUS   ROLES           AGE   VERSION
dxps-cluster-control-plane   Ready    control-plane   18d   v1.25.2
dxps-cluster-worker          Ready    <none>          18d   v1.25.2
dxps-cluster-worker2         Ready    <none>          18d   v1.25.2
❯ 

Using kind v0.16.0 on Pop!_OS 22.04 LTS, and having already a multi-node cluster running.

And basically I'm trying to set up an ingress controller, so the istioinaction_cluster_config file contains:

❯ cat istioinaction_cluster_config 
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP

❯

Deleted the existing cluster(s), and this time the creation succeeded:

❯ kind create cluster -n istioinaction --config istioinaction_cluster_config --retain; kind export logs; kind delete cluster
Creating cluster "istioinaction" ...
 βœ“ Ensuring node image (kindest/node:v1.25.2) πŸ–Ό
 βœ“ Preparing nodes πŸ“¦  
 βœ“ Writing configuration πŸ“œ 
 βœ“ Starting control-plane πŸ•ΉοΈ 
 βœ“ Installing CNI πŸ”Œ 
 βœ“ Installing StorageClass πŸ’Ύ 
Set kubectl context to "kind-istioinaction"
You can now use your cluster with:

kubectl cluster-info --context kind-istioinaction

Have a nice day! πŸ‘‹
❯ 
stmcginnis commented 1 year ago

unfortunately I don't get any further details

Check on the local filesystem. The kind export logs command should have written out more detailed logs.

stmcginnis commented 1 year ago

Also, this is a closed issue, so if folks are experiencing current failures it would be good to file a new issue to track those down.

BenTheElder commented 1 year ago

kind export logs will require --name=istioinaction since your cluster was named when created. The name flag should be used consistently across cluster commands (or else the environment variable)
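In other words, either pass the same --name to every command or set it once via the environment variable kind reads (the cluster name here matches the example above):

```shell
# Keep the cluster name consistent across kind commands, either with
# --name on every invocation or via KIND_CLUSTER_NAME set once.
export KIND_CLUSTER_NAME=istioinaction
# kind create cluster --config istioinaction_cluster_config --retain
# kind export logs          # now exports the istioinaction cluster's logs
# kind delete cluster       # deletes istioinaction, not "kind"
```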

BenTheElder commented 1 year ago

Folks, please file your own bugs with the bug issue template including all environment details it asks. I will ensure subsequent bugs link back here.

These additional discussions are NOT the original issue and are difficult to track interwoven like this in an old closed issue, and these comments are missing all of the bug issue template details that help us understand where specifically this is occurring.

I'm going to lock this now. The original issue is solved. Other new bugs with the same symptom (a semi-specific error message) require fresh details and root-causing. If you're still seeing this error message with a current kind release, please file an updated issue and we'll investigate.