kubernetes/website

Kubernetes website and documentation repo:
https://kubernetes.io
Creative Commons Attribution 4.0 International

Umbrella issue for Katacoda removal #33936

Closed: reylejano closed this 1 year ago

reylejano commented 2 years ago

Problem

(The public part of) Katacoda shut down on June 15, 2022: https://www.oreilly.com/online-learning/leveraging-katacoda-technology.html

The remaining part of Katacoda, which Kubernetes uses, is due to shut down ~~late 2022~~ early 2023.

Related to https://github.com/kubernetes/website/issues/33918 and https://github.com/kubernetes/website/issues/38785

Discussion

Use the GitHub Discussion to discuss what we should replace Katacoda with.

Specific steps

SIG Docs members propose to edit pages that use Katacoda to mark that the sandbox is unavailable. When an alternative is available, update the affected pages to use it.

Pages to update:

* https://kubernetes.io/docs/tutorials/hello-minikube/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-interactive/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-interactive/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-interactive/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/update/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
* https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-interactive/
* https://kubernetes.io/docs/tutorials/configuration/
* https://kubernetes.io/docs/tutorials/configuration/configure-java-microservice/
* https://kubernetes.io/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice/
* https://kubernetes.io/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/
* https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/
* https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-removal

Additional Information

/remove-kind bug
/priority important-soon

reylejano commented 2 years ago

/triage accepted

Debanitrkl commented 2 years ago

https://github.com/kubernetes-sigs/contributor-katacoda is where we work on Katacoda in SIG ContribEx.

nitishfy commented 2 years ago

Hey @reylejano, do we have to remove all the pages you've mentioned under "Pages to update"?

reylejano commented 2 years ago

@Debanitrkl mentioned on Slack that SIG ContribEx is looking at killercoda

@shannonxtreme mentioned converting the katacoda tutorials to use minikube

Do we as SIG Docs want to continue to maintain these tutorials if they use killercoda or minikube? Should we turn the tutorials into instructions to use on any Kubernetes cluster?

shannonxtreme commented 2 years ago

@Debanitrkl mentioned on Slack that SIG ContribEx is looking at killercoda

@shannonxtreme mentioned converting the katacoda tutorials to use minikube

Do we as SIG Docs want to continue to maintain these tutorials if they use killercoda or minikube? Should we turn the tutorials into instructions to use on any Kubernetes cluster?

IMO we can direct users to playgrounds or minikube or whatever (maybe a standard "Before you begin" that says to set up a Kubernetes cluster using an option such as X, Y, or Z) and keep our responsibilities scoped to platform-agnostic instructions.

I just enjoyed the interactive tutorials, but maintaining their compatibility with platform X shouldn't be on SIG Docs.
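
A rough sketch of what that shared "Before you begin" note could look like (wording and tool list purely illustrative; nothing is decided yet):

```markdown
## Before you begin

You need a Kubernetes cluster, and the kubectl command-line tool configured
to communicate with it. If you do not already have a cluster, you can create
one locally (for example with minikube or kind), or use an online playground.
```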

afbjorklund commented 2 years ago

As far as I know, the katacoda examples are using minikube at the moment?

```console
$ start.sh
Starting Kubernetes...minikube version: v1.18.0
commit: ec61815d60f66a6e4f6353030a40b12362557caa-dirty
* minikube v1.18.0 on Ubuntu 18.04 (amd64)
* Using the none driver based on existing profile
```

An older version (1.18 vs 1.25), and run from a control plane terminal, but anyway.


Note: the "none" driver of minikube is basically just a wrapper around kubeadm

It also deploys the kubernetes dashboard, which is otherwise equally painful to set up.
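
(That is, with minikube the dashboard is one addon away; these are standard minikube subcommands:)

```shell
# Enable the dashboard addon, then print a proxied URL for it;
# 'minikube dashboard' runs kubectl proxy under the hood.
minikube addons enable dashboard
minikube dashboard --url
```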

shannonxtreme commented 2 years ago

Yeah, they are using minikube, so we'd basically be directing the user to run minikube wherever they want, instead of using the katacoda playground. It'll be less directly interactive, sadly.

afbjorklund commented 2 years ago

I added some notes and links on how to improve the minikube "none" driver for this scenario.

EDIT: also added an example of how to use the "docker" driver for setting up two fake nodes.

Providing the virtual machine and the web console for it would be up to the "learning platform".

afbjorklund commented 2 years ago

The Jupyter notebooks are quite nice.

https://github.com/kubernetes-client/python/blob/master/examples/notebooks/README.md

Unfortunately they use Python, not kubectl...

https://github.com/kubernetes-client/python/blob/master/examples/notebooks/create_deployment.ipynb

sftim commented 2 years ago

I suggest that we change the page templates so that any page using a Katacoda shortcode gets a removal warning, to explain that Katacoda is shutting down.

https://gohugo.io/templates/shortcode-templates/#checking-for-existence explains how to write a conditional based on the presence of a shortcode.
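
A minimal sketch of such a conditional, assuming the shortcode is named `katacoda` and that the warning lives in the site's single-page template (path and markup illustrative):

```go-html-template
{{/* e.g. in layouts/docs/single.html: warn on any page that embeds the
     katacoda shortcode */}}
{{ if .HasShortcode "katacoda" }}
<div class="alert alert-warning">
  Note: the interactive sandbox on this page is provided by Katacoda,
  which is shutting down.
</div>
{{ end }}
```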

bwplotka commented 2 years ago

FYI: We use Katacoda heavily for the Thanos project, and we discussed some alternatives to consider - but definitely not in so short timespan: https://github.com/thanos-io/thanos/issues/5385

TL;DR https://killercoda.com/ (mentioned already) or https://instruqt.com/

adamwitwer commented 2 years ago

Hi team. I run the product team at O'Reilly, and we've set up a separate server for Kubernetes.io. The Katacoda scenarios on those pages will not be affected by the shutdown of the public site. My apologies for the miscommunication. @reylejano I've sent you an email with more context.

Debanitrkl commented 2 years ago

Hi team. I run the product team at O'Reilly, and we've set up a separate server for Kubernetes.io. The Katacoda scenarios on those pages will not be affected by the shutdown of the public site. My apologies for the miscommunication. @reylejano I've sent you an email with more context.

Does that mean even this is supported? It also comes under Kubernetes upstream, but is not directly linked from kubernetes.io.

sftim commented 2 years ago

Does that mean even this is supported? It also comes under Kubernetes upstream, but is not directly linked from kubernetes.io.

https://katacoda.com/k8scontributors will not survive the shutdown of https://katacoda.com/ and does need to be migrated. I recommend @Debanitrkl that you file a separate issue elsewhere, for SIG ContribEx to track.

sftim commented 2 years ago

/remove-priority important-soon

We need to decide what priority to set for this.

sftim commented 2 years ago

@reylejano I took the liberty of updating the issue description to strike through the proposed solution. Katacoda for Kubernetes is still there.

utkarsh-singh1 commented 2 years ago

Hi, I think https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/ was skipped in the above list. Just found it while going through the docs.

RichardoC commented 2 years ago

Is it worth migrating these to https://killercoda.com/ ? Given it's the organisation that already hosts the CKA/CKS mock exams, it might be a good fit?

sftim commented 2 years ago

Is it worth migrating these to https://killercoda.com/ ?

There are two ways to consider that question:

My feeling is that the answers are Yes and Yes respectively. Migrating to something that's not Katacoda might help readers, even with the need to sign up, but the effort it'd require is best spent elsewhere (I have a query to illustrate).

utkarsh-singh1 commented 2 years ago

Hi, I think https://kubernetes.io/docs/contribute/style/style-guide/#katacoda-embedded-live-environment was skipped in the above list. Just found it while going through the docs.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

* After 90d of inactivity, lifecycle/stale is applied
* After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
* After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

* Mark this issue or PR as fresh with /remove-lifecycle stale
* Mark this issue or PR as rotten with /lifecycle rotten
* Close this issue or PR with /close
* Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

reylejano commented 2 years ago

/remove-lifecycle stale

sftim commented 2 years ago

Also see https://github.com/kubernetes/website/issues/37817

Ritikaa96 commented 2 years ago

Are we going to fully adopt killercoda? Some katacoda scenarios have unsupported versions, and since katacoda is not maintained any further, all of them are bound to end up the same way someday. Am I right, or is there more to the situation? Also, I have to say killercoda loads faster than the katacoda scenarios.

Ritikaa96 commented 2 years ago

The docs in killercoda say developers can use their katacoda repositories as well, so they can import existing katacoda scenarios and run them as killercoda scenarios too. It seems workable if this can be done. Has anyone tried this? Do we have any katacoda author who can verify this?

sftim commented 2 years ago

The docs in killercoda say developers can use their katacoda repositories as well, so they can import existing katacoda scenarios and run them as killercoda scenarios too. It seems workable if this can be done.

Has anyone tried this?

I don't think so.

Do we have any katacoda author who can verify this?

The team behind Katacoda aren't likely to know the answer, to be honest: killercoda is a separate thing.

wuestkamp commented 2 years ago

It should be possible to port most scenarios from Katacoda to Killercoda; many have done it. But there are also differences, and there is a Katacoda Migration Guide for this.
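
(For orientation: both platforms describe a scenario with an index.json. A minimal Killercoda-style example, with field names based on their public example scenarios and all values illustrative:)

```json
{
  "title": "Kubernetes Basics",
  "description": "Deploy and explore an app on a running cluster",
  "details": {
    "intro": { "text": "intro.md" },
    "steps": [
      { "title": "Deploy an app", "text": "step1.md" }
    ],
    "finish": { "text": "finish.md" }
  },
  "backend": { "imageid": "kubernetes-kubeadm-1node" }
}
```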

Ritikaa96 commented 2 years ago

to be honest: killercoda is a separate thing.

I agree with you there. @wuestkamp I have seen this guide, but haven't tried it or seen anyone successfully try it. If it is possible, then using killercoda for all the interactive tutorials in the k8s docs would also be feasible.

sftim commented 1 year ago

/lifecycle frozen
/priority important-soon

JFMeeks commented 1 year ago

Hello, my name is Jesse Meeks and I am the product manager responsible for Katacoda here at O'Reilly. I am happy to put this team in touch with the Katacoda devs on our side in an effort to help you migrate to Killercoda.

It is correct that the public-facing Katacoda.com site is no longer supported, and it is our intent to take the site down entirely by the end of the year, though this is something I'd like to discuss.

Kubernetes.io is the last remaining service running from the public Katacoda hosts and, in fact, has been seeing quite a bit of activity lately. This activity, while fantastic for Kubernetes.io, unfortunately has prompted some scrutiny and increased urgency around finalizing the shutdown of Katacoda.com. Because Katacoda will only be offered as a private service to O'Reilly customers once the Katacoda.com site is shut down, there won't be a way for us to embed a publicly accessible Katacoda scenario outside of oreilly.com, effectively breaking the terminal in your documentation.

As I mentioned, I'd like to be on the same page with regards to a shutdown date; I certainly don't want to leave the team in a compromised position. If it is helpful to do so, I am happy to jump on a call or correspond via email: jmeeks@oreilly.com

AlanGreene commented 1 year ago

@JFMeeks Has there been a change in approach since this comment in June that Kubernetes.io would not be affected by the public Katacoda shutdown? https://github.com/kubernetes/website/issues/33936#issuecomment-1145151104

Hi team. I run the product team at O'Reilly, and we've set up a separate server for Kubernetes.io. The Katacoda scenarios on those pages will not be affected by the shutdown of the public site.

Perhaps there's been additional communication since then that's not reflected in this issue.

sftim commented 1 year ago

Perhaps there's been additional communication since then that's not reflected in this issue.

Surely https://github.com/kubernetes/website/issues/33936#issuecomment-1346832443 is that communication.

sftim commented 1 year ago

https://github.com/kubernetes/website/issues/33936#issuecomment-1346832443 was originally posted as https://github.com/kubernetes/website/issues/37817#issuecomment-1341412180; however, this is the better issue to track the removal, now that it's confirmed as something we must do.

nikitar commented 1 year ago

I inquired with killercoda about minikube support, but the bigger question is where to migrate in general. I'm happy to help with the work, but I'm not a maintainer, so I'm not really in a position to pick the destination. And I imagine there are standard processes (RFC?) for making those sorts of decisions.

nikitar commented 1 year ago

@sftim @afbjorklund What's the process for making those sorts of decisions?

Killercoda folks have no plans to add minikube to their k8s images, but if it's really necessary they suggest using the ubuntu image and installing minikube manually. Presumably it'd look something like:

```shell
apt install virtualbox
curl -sL https://path/to/minikube/binary
```
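
(A concrete sketch, using the upstream release URL from the minikube docs; whether a hypervisor is actually needed depends on the driver, as discussed below:)

```shell
# Fetch and install the latest minikube binary from the official release bucket
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```
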
afbjorklund commented 1 year ago

It is not necessary; as we have debated elsewhere, minikube, kind, and kubeadm are all valid ways of installing k8s.

You don't need VirtualBox to run minikube; it is one of the original drivers, but it is not even the only hypervisor anymore...

Here is the old katacoda config:

```console
$ more .minikube/config/config.json
{
    "ShowBootstrapperDeprecationNotification": false,
    "WantNoneDriverWarning": false,
    "WantReportErrorPrompt": false,
    "WantUpdateNotification": false,
    "driver": "none",
    "kubernetes-version": "v1.20.2"
}
```

Basically the same as the command:

```shell
minikube start --driver=none --kubernetes-version=v1.20.2
```

https://minikube.sigs.k8s.io/docs/start/
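
(The same persistent settings can also be written with `minikube config`, which edits that config.json; a small sketch:)

```shell
# Persist the driver and Kubernetes version, equivalent to the config file above
minikube config set driver none
minikube config set kubernetes-version v1.20.2
```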

`start.sh` command:

```shell
echo -n "Starting Kubernetes..."
minikube version
minikube start --wait=false
sleep 2
n=0
until [ $n -ge 10 ]
do
    (minikube addons enable metrics-server && minikube addons enable dashboard) && break
    n=$[$n+1]
    sleep 1
done
sleep 1
n=0
until [ $n -ge 10 ]
do
    kubectl apply -f /opt/kubernetes-dashboard.yaml &>/dev/null && break
    n=$[$n+1]
    sleep 1
done
echo "Kubernetes Started"
```

```yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/minikube-addons: dashboard
  name: kubernetes-dashboard
  selfLink: /api/v1/namespaces/kubernetes-dashboard
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard-katacoda
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
    nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
```

I don't know what Katacoda was using, but here is an approximation of the VM setup using Vagrant and VirtualBox:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.provider "virtualbox" do |vb|
    vb.cpus = "2"
    vb.memory = "2500"
  end
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y conntrack
    curl -sSL https://get.docker.com | sudo sh -
    sudo usermod -aG docker $SUDO_USER
    curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
    sudo dpkg -i minikube_latest_amd64.deb
  SHELL
end
```

With the minikube configuration from above, the command is now supposed to look something like this:

```console
vagrant@ubuntu-bionic:~$ time minikube start
😄  minikube v1.28.0 on Ubuntu 18.04 (vbox/amd64)
✨  Using the none driver based on user configuration
🧯  The requested memory allocation of 2200MiB does not leave room for system overhead (total system memory: 2436MiB). You may face stability issues.
💡  Suggestion: Start minikube with less memory allocated: 'minikube start --memory=2200mb'
👍  Starting control plane node minikube in cluster minikube
🤹  Running on localhost (CPUs=2, Memory=2436MB, Disk=39630MB) ...
ℹ️  OS release is Ubuntu 18.04.6 LTS
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.12 ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🤹  Configuring local host environment ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

real	0m29.575s
user	0m5.396s
sys	0m1.215s
```

There is a bunch of other configuration done in the katacoda image, but I don't have the source code for it?

The main goal of the "machine" provisioning is to create a machine and provision (install) a container runtime, without having to use a proprietary tool like Docker Desktop. If you already have one, you can instead choose to run nested containers (like kind). Finally, once the machine (nodes) is up and running it will bootstrap the cluster (with kubeadm). The rest is just lifecycle tools (including autopause), and convenience wrappers for features such as image or dashboard.

If you want to do your own provisioning and bootstrapping, then you don't have to use minikube; you can invent your own.

afbjorklund commented 1 year ago

they suggest using the ubuntu image and installing minikube manually

you don't have to install virtualbox, but you do have to install all the other requirements for it to work

* /etc/modules-load.d/k8s.conf, /etc/sysctl.d/k8s.conf
* iptables (>= 1.4.21), iproute2, socat, util-linux, mount, ebtables, ethtool, conntrack
* cri-tools
* cni-plugins
* docker
* cri-dockerd

Basically it comes down to installing all the requirements for kubeadm, and also preloading the images? You would also have to download and install the kubernetes components, but minikube does this for you.
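
(As a sketch of what the first two items in the list above mean in practice, matching the upstream container-runtime prerequisites:)

```shell
# Kernel modules and sysctls that kubeadm's preflight checks expect,
# i.e. the /etc/modules-load.d/k8s.conf and /etc/sysctl.d/k8s.conf above
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```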

Side note: this will soon also include the CRI and CNI (currently installed as part of the OS, not Kubernetes).

Naturally you also have to do all the other cluster configuration, such as the CRI and CNI and untainting the master, etc. Like so: https://github.com/lima-vm/lima/blob/v0.14.2/examples/k8s.yaml (here using containerd and flannel)

But it was just easier to do the minikube start?

Or maybe `curl -fsSL https://get.k8s.io | bash`

nikitar commented 1 year ago

@afbjorklund Thanks for the information. I'm gonna hold off on looking further into this until there is a decision on whether/where to migrate the tutorials. (Would love to understand how those kinds of decisions are made; I assume one of the SIGs would be involved.)

To be clear, killercoda does offer pre-configured k8s images too (see the 'Environments' section here), they just don't include minikube. kubernetes-kubeadm-1node can be used for all tutorials except the minikube one, I imagine.

afbjorklund commented 1 year ago

I'm gonna hold off on looking further into this until there is a decision on whether/where to migrate the tutorials.

I added some new bug reports on how to improve the appearance of the minikube "none" driver...

Even if it ends up not being used for the kubernetes online tutorials anymore (😿), it is still a bit broken.


Currently it is a bit indecisive about installing on the node (like kubeadm) or in nested containers (like kind). But that seems to apply to all environments, even though running on the control plane is an anti-pattern?

But in a cloud context, it would be more expensive to have 1 VM for the "laptop" and 1 VM for the "cluster"

The main reason why dind is cheaper is that we are cheating kubernetes about the available resources: it thinks there are two nodes, but in reality they are both sharing the cpus and memory (and disk).

```shell
minikube start --driver=docker --nodes=2
```
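
(You can see the "cheating" directly, since both node containers report the host's full capacity:)

```shell
# Each "node" is a container on the same host, so both report the
# host's full CPU and memory under Capacity
kubectl get nodes
kubectl describe nodes | grep -A 5 'Capacity:'
```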

Also it is horribly complex to maintain, and seems to cause issues not found elsewhere, but that's another story.

kubernetes-kubeadm-1node can be used for all tutorials except the minikube one, I imagine.

I don't really see how you can (easily) run the same environment "at home" (offline), though. 🤔

I can see the markdown, but I don't know where the images are created and coming from...?

https://killercoda.com/kubernetes

https://github.com/killercoda/scenarios-kubernetes


There are a lot of different options, but so far none that uses kind / k3d (from what I can tell)?

* kubernetes-kubeadm-1node
* kubernetes-kubeadm-2nodes
* kubernetes-k3s-1node
* kubernetes-k3s-2nodes

But I guess you can start with the ubuntu image, and then use the documentation to install them?

So far there is no support for "nerdctl", which is what one would want in order to talk to containerd/buildkitd. For instance, if you want to be able to build images without having to save/load or push/pull them...

That is the main reason minikube is still defaulting to dockerd (moby): to not have to run two engines. And for quick experiments, it can be very useful to just be able to run a development container locally.
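
(A sketch of what the nerdctl path would look like, assuming nerdctl is installed and buildkitd is running on the node:)

```shell
# Build straight into containerd's k8s.io namespace so the kubelet can
# see the image immediately, with no save/load or push/pull step
nerdctl --namespace=k8s.io build -t myimage .
```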


As far as I know, it is up to SIG Docs to find hosting for the tutorials - and killercoda looks like the target

I was just trying to improve the content, so that you don't run software from 2021 instead of from 2023...

```
version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
```

Our end goal is the same: https://kubernetes.io/blog/2016/07/minikube-easily-run-kubernetes-locally/

sftim commented 1 year ago

I'd like to announce the removal to end users before the switch off happens. See issue #38785.

sftim commented 1 year ago

This issue is relevant to https://github.com/kubernetes/website/pull/38744

afbjorklund commented 1 year ago

@nikitar did you have any other alternatives for migration hosting, besides killercoda?

kubernetes-kubeadm-1node can be used for all tutorials except the minikube one, I imagine.

This seems accurate, and there is no longer a need to run start.sh or launch.sh for the other tutorials:

https://itnext.io/katacoda-to-killercoda-migration-guide-d21961fc0c9b

"On Katacoda it was (sometimes) necessary to run a magical launch.sh to get that Kubernetes environment running. This is no longer necessary on Killercoda because all K8s environments are always running and ready!"

  1. I am not sure what the side effects are when they bypass the 2 vCPU requirement from kubeadm (`--ignore-preflight-errors=NumCPU`)?

  2. Currently killercoda is running with the "Canal" CNI, which is a combination of Calico and Flannel: https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel (`kubectl apply -f canal.yaml`)

  3. It is running a docker daemon to do builds, and those images then need to be imported into containerd: `docker build -t myimage . && docker save myimage | ctr --namespace=k8s.io images import --digests -`

  4. I don't know if it is possible to access the kubernetes dashboard, but it seemed like a work in progress? https://killercoda.com/examples/scenario/kubernetes-dashboard shows how to do it "The Hard Way".


If you want to continue to support hello minikube, then that could be done with the ubuntu image.

https://kubernetes.io/docs/tutorials/hello-minikube/

https://killercoda.com/examples/scenario/ubuntu-simple

It would need some fixes, such as the bugs mentioned above, or running with a smaller amount of resources.

```
⛔  Exiting due to RSRC_INSUFFICIENT_CORES: Requested cpu count 2 is greater than the available cpus of 1
```

```
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
```

* https://github.com/kubernetes/minikube/issues/15609

If you try to run with the default config, then you will also run into another issue when running docker as root:

```
🛑  The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
💡  If you are running minikube within a VM, consider using --driver=none:
📘    https://minikube.sigs.k8s.io/docs/reference/drivers/none/
```

```
⛔  Exiting due to RSRC_INSUFFICIENT_CORES: Docker has less than 2 CPUs available, but Kubernetes requires at least 2 to be available
```

* https://github.com/kubernetes/minikube/issues/15608

But for the rest of the scenarios, there should be nothing blocking the katacoda to killercoda conversion.

afbjorklund commented 1 year ago

I made a patched version of minikube (for NumCPU), and using this version it runs on killercoda:

```console
ubuntu $ minikube version
minikube version: v1.28.0
commit: b45c13b3eb399ba7c63db31609f4a013b1f8d638
ubuntu $ minikube start --driver=none --force
😄  minikube v1.28.0 on Ubuntu 20.04 (amd64)
❗  minikube skips various validations when --force is supplied; this may lead to unexpected behavior
✨  Using the none driver based on existing profile
⛔  Requested cpu count 2 is greater than the available cpus of 1
⛔  None has less than 2 CPUs available, but Kubernetes requires at least 2 to be available
⛔  Requested cpu count 2 is greater than the available cpus of 1
⛔  None has less than 2 CPUs available, but Kubernetes requires at least 2 to be available
🧯  The requested memory allocation of 1983MiB does not leave room for system overhead (total system memory: 1983MiB). You may face stability issues.
💡  Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1983mb'
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing none bare metal machine for "minikube" ...
ℹ️  OS release is Ubuntu 20.04.5 LTS
🐳  Preparing Kubernetes v1.25.3 on Docker 20.10.12 ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🤹  Configuring local host environment ...
❗  The 'none' driver is designed for experts who need to integrate with an existing VM
💡  Most users should use the newer 'docker' driver instead, which does not require root!
📘  For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
❗  kubectl and minikube configuration will be stored in /root
❗  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

    ▪ sudo mv /root/.kube /root/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
ubuntu $ minikube kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:57:26Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:49:09Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
ubuntu $ minikube kubectl -- get nodes -o wide
NAME     STATUS     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
ubuntu   NotReady   control-plane   50s   v1.25.3   172.30.1.2    <none>        Ubuntu 20.04.5 LTS   5.4.0-131-generic   docker://20.10.12
```

There is of course a large number of warnings, and you do have to install some extra dependencies:

```console
sudo apt update
sudo apt install -y conntrack socat
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.0/cri-dockerd_0.3.0.3-0.ubuntu-focal_amd64.deb
sudo dpkg -i cri-dockerd_0.3.0.3-0.ubuntu-focal_amd64.deb
```

But cri-tools and cni-plugins have already been installed, from the deprecated [kubic](https://build.opensuse.org/project/show/devel:kubic:libcontainers:stable) repositories.

```
cri-tools/now 1.21.0~2 amd64 [installed,local]
containernetworking-plugins/now 100:1.1.1~1 amd64 [installed,local]
```

Docker was installed, using the system packages (not from the vendor packages, on docker.com).

https://packages.ubuntu.com/focal-updates/docker.io

https://packages.ubuntu.com/focal-updates/containerd

sftim commented 1 year ago

@reylejano I updated the issue description to highlight the new GitHub discussion.

afbjorklund commented 1 year ago

I would suggest removing minikube from the first scenario as well (all the other scenarios start with it already running).

https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/ - call it "Connect to Cluster" or such?

This will allow migrating to killercoda, and leaving "Hello, Minikube" as the only page needing minikube.


Then, if someone wants to contribute scenarios on how to install and run minikube or kind "from scratch"...

But those can probably refer to the project documentation, and link straight over to the minikube and kind sites?

The killercoda environment comes with a pre-installed and running kubeadm cluster, without start.sh

For the user, this means they can start directly with kubectl without having to first look at the cluster installer.
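
(In other words, the first step of a migrated tutorial could begin with something like the following; the bootcamp image is the one the current basics tutorial uses:)

```shell
# The cluster is already up, so a learner starts with kubectl right away
kubectl get nodes
kubectl create deployment kubernetes-bootcamp \
  --image=gcr.io/google-samples/kubernetes-bootcamp:v1
kubectl get deployments
```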

nitishfy commented 1 year ago

I would suggest removing minikube from the first scenario as well (all the other scenarios start with it already running).

https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/ - call it "Connect to Cluster" or such?

This will allow migrating to killercoda, and leaving "Hello, Minikube" as the only page needing minikube.

Then, if someone wants to contribute scenarios on how to install and run minikube or kind "from scratch"...

But those can probably refer to the project documentation, and link straight over to the minikube and kind sites?

The killercoda environment comes with a pre-installed and running kubeadm cluster, without start.sh

For the user, this means they can start directly with kubectl without having to first look at the cluster installer.

That's a good approach! However, I'm unsure whether SIG Docs maintains the minikube documentation. How would it sound to redirect an important topic of the Kubernetes documentation to documentation outside the Kubernetes docs :( ?

afbjorklund commented 1 year ago

However, I'm likely unsure if sig-docs is maintaining the minikube documentation.

We are talking about the "Hello, Minikube" page for Katacoda here, not the minikube documentation... Just saying that when the online tutorial is removed (with Katacoda), there will not be much left of it?

So it would then look more like the information on this page: https://kubernetes.io/docs/tasks/tools/ - a short introduction, and then a link with the details. That already happens with, e.g., the container runtimes?

It's fine to keep the separate page of course, but then it needs to be kept in sync with the project.

And the k8s.io front page today skips right over it - and goes straight to "Learn Kubernetes Basics".

sftim commented 1 year ago

This will allow migrating to killercoda

Please use https://github.com/kubernetes/website/discussions/38878 to discuss what should replace Katacoda - we haven't yet decided.

sftim commented 1 year ago

I'm proposing that we work towards removing Katacoda by the end of March 2023. How does that sound, Kubernetes folks?

sftim commented 1 year ago

Banner to announce that: https://github.com/kubernetes/website/pull/39257