knative / serving

Kubernetes-based, scale-to-zero, request-driven compute
https://knative.dev/docs/serving/
Apache License 2.0

Error in Knative GitHub codespaces environment #14537

Open aliok opened 8 months ago

aliok commented 8 months ago

What version of Knative?

0.11.x

Expected Behavior

No errors in the terminal; clicking the button should produce a working development environment.

Actual Behavior

When I click the "Open in GitHub Codespaces" button in https://github.com/knative/serving/blob/main/DEVELOPMENT.md#getting-started-with-github-codespaces, there's an error in the terminal.

This is something we should be leveraging to onboard folks who are new to Knative.

Steps to Reproduce the Problem

1. Click the "Open in GitHub Codespaces" button in https://github.com/knative/serving/blob/main/DEVELOPMENT.md#getting-started-with-github-codespaces.

2. Wait a bit.

3. Check the "GitHub Codespaces: Details" output in the terminal.
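
The same postCreateCommand failure can presumably also be reproduced locally, without Codespaces, using the devcontainers CLI (the same `@devcontainers/cli` package that appears in the stack traces below). A minimal sketch, assuming Docker and Node.js are available on the host:

```bash
# Sketch: reproduce the devcontainer build and setup outside Codespaces.
# Assumes a local Docker daemon and Node.js.
npm install -g @devcontainers/cli
git clone https://github.com/knative/serving.git && cd serving

# Builds the dev container and runs the postCreateCommand
# (.devcontainer/setup.sh); a failure here should mirror the Codespaces error.
devcontainer up --workspace-folder .
```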

[Screenshot of the Codespaces terminal, 2023-10-17 at 16:59]

You will see this error:

ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"
[49908 ms] postCreateCommand failed with exit code 1. Skipping any further user-provided commands.

Error: Command failed: /bin/sh -c bash .devcontainer/setup.sh
    at OY (/usr/lib/node_modules/@devcontainers/cli/dist/spec-node/devContainersSpecCLI.js:235:130)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Nl (/usr/lib/node_modules/@devcontainers/cli/dist/spec-node/devContainersSpecCLI.js:227:4393)
    at async Rl (/usr/lib/node_modules/@devcontainers/cli/dist/spec-node/devContainersSpecCLI.js:227:3738)
{"outcome":"error","message":"Command failed: /bin/sh -c bash .devcontainer/setup.sh","description":"The postCreateCommand in the devcontainer.json failed.","containerId":"dc8e38e7f6a861e4cd0c7d33be1c7c7872856de4e6fdb2f32cdf19a55eff47ec"}
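
For context, the failing step is the repo's postCreateCommand, which runs `.devcontainer/setup.sh`. Judging from the kind error above, the script ends up calling `kind create cluster`; a minimal sketch of that step (hypothetical, the real script may do more):

```bash
#!/usr/bin/env bash
# Hypothetical reduction of .devcontainer/setup.sh to the step that fails.
set -euo pipefail

# Create a local Kubernetes cluster inside the codespace's Docker daemon.
# This is the call that produces the error above: kind waits for the node
# container's systemd to log "Reached target ... Multi-User System" (or a
# cgroup v1 marker) and gives up when neither line ever appears.
kind create cluster
```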
aliok commented 8 months ago

/good-first-issue

knative-prow[bot] commented 8 months ago

@aliok: This request has been marked as suitable for new contributors.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-good-first-issue command.

In response to [this](https://github.com/knative/serving/issues/14537):

> /good-first-issue

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
aliok commented 8 months ago

/hold

Codespaces has some issues and we're still evaluating how useful it is: https://github.com/knative/serving/issues/13806#issuecomment-1544718993

aliok commented 8 months ago

Related: https://github.com/knative/community/issues/1295

If someone wants to give it a try and fix these issues, we're fine with that.

Vandit-dev commented 8 months ago

Hello @aliok,

Currently, no errors are happening.

[screenshots attached]

izabelacg commented 3 months ago

I've just run into this error. The IDE opens fine, but if you look at the steps in the screenshot above, the postCreateCommand fails... Looking further at the full logs, it seems the kind cluster is not up at all.
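
A quick way to confirm that from the codespace terminal (assuming kind's default cluster name, `kind`):

```bash
# Empty output means kind never registered a cluster:
kind get clusters

# Fails with a connection error if the control plane is absent:
kubectl cluster-info --context kind-kind
```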

davidpechcz commented 2 months ago

Problem from logs:

 ✗ Preparing nodes 📦 
2024-04-16 13:15:18.045Z: Deleted nodes: ["kind-control-plane"]
2024-04-16 13:15:18.054Z: ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"
2024-04-16 13:15:18.109Z: postCreateCommand failed with exit code 1. Skipping any further user-provided commands.

2024-04-16 13:15:18.115Z: Error: Command failed: /bin/sh -c bash .devcontainer/setup.sh
2024-04-16 13:15:18.118Z:     at wY (/.codespaces/agent/bin/node_modules/@devcontainers/cli/dist/spec-node/devContainersSpecCLI.js:235:130)
2024-04-16 13:15:18.123Z:     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
2024-04-16 13:15:18.127Z:     at async nl (/.codespaces/agent/bin/node_modules/@devcontainers/cli/dist/spec-node/devContainersSpecCLI.js:227:4393)
2024-04-16 13:15:18.138Z:     at async tl (/.codespaces/agent/bin/node_modules/@devcontainers/cli/dist/spec-node/devContainersSpecCLI.js:227:3738)
2024-04-16 13:15:18.143Z:     at async rl (/.codespaces/agent/bin/node_modules/@devcontainers/cli/dist/spec-node/devContainersSpecCLI.js:227:2942)
2024-04-16 13:15:18.147Z:     at async fs (/.codespaces/agent/bin/node_modules/@devcontainers/cli/dist/spec-node/devContainersSpecCLI.js:227:2386)
2024-04-16 13:15:18.154Z:     at async q$ (/.codespaces/agent/bin/node_modules/@devcontainers/cli/dist/spec-node/devContainersSpecCLI.js:464:1488)
2024-04-16 13:15:18.159Z:     at async iK (/.codespaces/agent/bin/node_modules/@devcontainers/cli/dist/spec-node/devContainersSpecCLI.js:464:960)
2024-04-16 13:15:18.170Z:     at async gAA (/.codespaces/agent/bin/node_modules/@devcontainers/cli/dist/spec-node/devContainersSpecCLI.js:481:3660)
2024-04-16 13:15:18.181Z:     at async BC (/.codespaces/agent/bin/node_modules/@devcontainers/cli/dist/spec-node/devContainersSpecCLI.js:481:4775)
2024-04-16 13:15:18.188Z: {"outcome":"error","message":"Command failed: /bin/sh -c bash .devcontainer/setup.sh","description":"The postCreateCommand in the devcontainer.json failed.","containerId":"1c6a9049243415919e9e12823faeb57747e1cbead715647d32618e4b1caa96dc"}
2024-04-16 13:15:18.194Z: devcontainer process exited with exit code 1

Full log enclosed: codespaces.log

Using the --retain parameter for `kind create cluster`, I was able to salvage this from the kindest/node container that failed to start:

INFO: ensuring we can execute mount/umount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: detected cgroup v2
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: faking /sys/class/dmi/id/product_uuid to be random
INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
INFO: setting iptables to detected mode: legacy
INFO: Detected IPv4 address: 172.18.0.2
INFO: Detected IPv6 address: fc00:f853:ccd:e793::2
systemd 249.11-0ubuntu3.7 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Detected virtualization docker.
Detected architecture x86-64.

Welcome to Ubuntu 22.04.2 LTS!

Failed to create /init.scope control group: Operation not supported
Failed to allocate manager object: Operation not supported
[!!!!!!] Failed to allocate manager object.
Exiting PID 1...
INFO: ensuring we can execute mount/umount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: detected cgroup v2
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: faking /sys/class/dmi/id/product_uuid to be random
INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
INFO: setting iptables to detected mode: legacy
INFO: Detected IPv4 address: 172.18.0.2
INFO: Detected old IPv4 address: 172.18.0.2
INFO: Detected IPv6 address: fc00:f853:ccd:e793::2
INFO: Detected old IPv6 address: fc00:f853:ccd:e793::2
systemd 249.11-0ubuntu3.7 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Detected virtualization docker.
Detected architecture x86-64.

Welcome to Ubuntu 22.04.2 LTS!

Failed to create /init.scope control group: Operation not supported
Failed to allocate manager object: Operation not supported
[!!!!!!] Failed to allocate manager object.
Exiting PID 1...
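
For anyone who wants to pull the same logs, a sketch of the salvage steps (the node container name `kind-control-plane` matches the "Deleted nodes" line in the log above):

```bash
# --retain keeps the node container around after a failed create instead of
# deleting it, so its boot output can be inspected:
kind create cluster --retain

# The systemd output shown above comes straight from the container logs:
docker ps -a --filter name=kind-control-plane
docker logs kind-control-plane

# kind can also bundle node logs for a bug report:
kind export logs
```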

I tried shuffling the kind node image version around, but even the v1.29.2 images have a similar problem:

INFO: ensuring we can execute mount/umount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: detected cgroup v2
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: faking /sys/class/dmi/id/product_uuid to be random
INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
INFO: setting iptables to detected mode: legacy
INFO: detected IPv4 address: 172.18.0.2
INFO: detected old IPv4 address: 172.18.0.2
INFO: detected IPv6 address: fc00:f853:ccd:e793::2
INFO: detected old IPv6 address: fc00:f853:ccd:e793::2
INFO: starting init
systemd 252.22-1~deb12u1 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Detected virtualization docker.
Detected architecture x86-64.

Welcome to Debian GNU/Linux 12 (bookworm)!

Failed to create /init.scope control group: Structure needs cleaning
Failed to allocate manager object: Structure needs cleaning
[!!!!!!] Failed to allocate manager object.
Exiting PID 1...

Might be related to https://github.com/moby/moby/issues/42275
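
The repeated "Failed to create /init.scope control group" from systemd inside the node points at cgroup delegation in the Codespaces Docker setup rather than at kind itself. Some hypothetical diagnostics that could narrow it down, run inside the codespace before `kind create cluster`:

```bash
# "cgroup2fs" means the container sees cgroup v2; "tmpfs" means v1:
stat -fc %T /sys/fs/cgroup/

# Which cgroup v2 controllers are actually delegated to this container:
cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null

# How the inner Docker daemon is configured:
docker info --format 'cgroup driver: {{.CgroupDriver}}, version: {{.CgroupVersion}}'
```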