/triage accepted
https://github.com/kubernetes-sigs/contributor-katacoda is where we work on Katacoda in ContribEx.
Hey @reylejano, do we have to remove all of the pages you've mentioned in the Pages to Update?
@Debanitrkl mentioned on Slack that SIG ContribEx is looking at killercoda
@shannonxtreme mentioned turning the katacoda tutorials to use minikube
Do we as SIG Docs want to continue to maintain these tutorials if they use killercoda or minikube? Should we turn the tutorials into instructions to use on any Kubernetes cluster?
IMO we can direct users to playgrounds or minikube or whatever (maybe a standard "before you begin" that says to set up a Kubernetes cluster using an option such as X, Y, or Z) and keep our responsibilities scoped to platform-agnostic instructions.
I just enjoyed the interactive tutorials, but maintaining their compatibility with platform X shouldn't be on SIG Docs.
As far as I know, the katacoda examples are using minikube at the moment?
```
$ start.sh
Starting Kubernetes...
minikube version: v1.18.0
commit: ec61815d60f66a6e4f6353030a40b12362557caa-dirty
* minikube v1.18.0 on Ubuntu 18.04 (amd64)
* Using the none driver based on existing profile
```
An older version (1.18 vs 1.25), and run from a control plane terminal, but anyway.
Note: the "none" driver of minikube is basically just a wrapper around kubeadm. It also deploys the Kubernetes dashboard, which is otherwise equally painful to set up.
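For illustration, a minimal sketch of what the scenarios were doing with it, assuming the "none" driver's requirements (root access and a container runtime) are already in place:

```bash
# Run Kubernetes directly on the host via kubeadm; no VM or container layer
sudo minikube start --driver=none

# One command deploys and proxies the dashboard, which is otherwise
# a multi-step manual setup
minikube dashboard --url
```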
Yeah, they are using minikube, so we'd basically be directing the user to run minikube wherever they want, instead of using the Katacoda playground. It'll be less directly interactive, sadly.
I added some notes and links on how to improve the minikube "none" driver for this scenario.
EDIT: also added an example of how to use the "docker" driver for setting up two fake nodes.
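For reference, a sketch of that two-node setup (the same command appears again later in this thread; the node names are minikube's defaults):

```bash
# Both "nodes" are containers sharing the host's CPUs, memory, and disk
minikube start --driver=docker --nodes=2

# The fake nodes show up as minikube and minikube-m02
kubectl get nodes -o wide
```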
Providing the virtual machine and the web console for it would be up to the "learning platform".
The jupyter notebooks are quite nice.
https://github.com/kubernetes-client/python/blob/master/examples/notebooks/README.md
Unfortunately they use Python, not kubectl...
https://github.com/kubernetes-client/python/blob/master/examples/notebooks/create_deployment.ipynb
I suggest that we change the page templates so that any page using a Katacoda shortcode gets a removal warning, to explain that Katacoda is shutting down.
https://gohugo.io/templates/shortcode-templates/#checking-for-existence explains how to write a conditional based on the presence of a shortcode.
FYI: We use Katacoda heavily for the Thanos project, and we discussed some alternatives to consider - but definitely not in such a short timespan: https://github.com/thanos-io/thanos/issues/5385
TL;DR https://killercoda.com/ (mentioned already) or https://instruqt.com/
Hi team. I run the product team at O'Reilly, and we've set up a separate server for Kubernetes.io. The Katacoda scenarios on those pages will not be affected by the shutdown of the public site. My apologies for the miscommunication. @reylejano I've sent you an email with more context.
Does that mean even this is supported? Because it also comes under Kubernetes upstream but is not directly linked to kubernetes.io.
https://katacoda.com/k8scontributors will not survive the shutdown of https://katacoda.com/ and does need to be migrated. I recommend @Debanitrkl that you file a separate issue elsewhere, for SIG ContribEx to track.
/remove-priority important-soon
We need to decide what priority to set for this.
@reylejano I took the liberty of updating the issue description to strike through the proposed solution. Katacoda for Kubernetes is still there.
Hi, I think this page is skipped in the above list: https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/. Just found it while going through the docs.
Is it worth migrating these to https://killercoda.com/? Given it's the organisation that already hosts the CKA/CKS mock exams, it might be a good fit?
> Is it worth migrating these to https://killercoda.com/?
There are two ways to consider that question:
My feeling is that the answers are Yes and Yes respectively. Migrating to something that's not Katacoda might help readers, even with the need to sign up, but the effort it'd require is best spent elsewhere (I have a query to illustrate).
Hi, I think this section is skipped in the above list: https://kubernetes.io/docs/contribute/style/style-guide/#katacoda-embedded-live-environment. Just found it while going through the docs.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Are we going to fully adopt Killercoda? Because some Katacoda scenarios have unsupported versions, and since Katacoda is no longer maintained, all of them are bound to end up the same way someday. Am I right, or is there more to the situation? Also, I have to say Killercoda loads faster than the Katacoda scenarios.
These docs on Killercoda say developers can use their Katacoda repositories as well, so they can import existing Katacoda scenarios and run them as Killercoda scenarios too. It seems workable if this can be done. Has anyone tried this? Do we have any Katacoda author who can verify this?
> These docs on Killercoda say developers can use their Katacoda repositories as well, so they can import existing Katacoda scenarios and run them as Killercoda scenarios too. It seems workable if this can be done.
>
> Has anyone tried this?
I don't think so.
> Do we have any Katacoda author who can verify this?
The team behind Katacoda aren't likely to know the answer, to be honest: killercoda is a separate thing.
It should be possible to port most scenarios from Katacoda to Killercoda; many have done it. But there are also differences, and there is a Katacoda Migration Guide for this.
> to be honest: killercoda is a separate thing.
I agree with you there. @wuestkamp, I have seen this guide, but I haven't tried it or seen anyone successfully try it. If it is possible, then using Killercoda would also be feasible for all the interactive tutorials.
/lifecycle frozen
/priority important-soon
Hello, my name is Jesse Meeks, and I am the product manager responsible for Katacoda here at O'Reilly. I am happy to put this team in touch with the Katacoda devs on our side, in an effort to help you migrate to Killercoda.
It is correct that the public-facing Katacoda.com site is no longer supported, and it is our intent to take the site down entirely by the end of the year, though this is something I'd like to discuss.
Kubernetes.io is the last remaining service running from the public Katacoda hosts and, in fact, has been seeing quite a bit of activity lately. This activity, while fantastic for Kubernetes.io, unfortunately has prompted some scrutiny and increased urgency around finalizing the shutdown of Katacoda.com. Because Katacoda will only be offered as a private service to O'Reilly customers upon shutdown of the Katacoda.com site, there won't be a way for us to embed a publicly accessible Katacoda scenario outside of oreilly.com, effectively breaking the terminal in your documentation.
As I mentioned, I'd like to be on the same page with regards to a shutdown date; I certainly don't want to leave the team in a compromised position. If it is helpful to do so, I am happy to jump on a call or correspond via email: jmeeks@oreilly.com
@JFMeeks Has there been a change in approach since this comment in June that Kubernetes.io would not be affected by the public Katacoda shutdown? https://github.com/kubernetes/website/issues/33936#issuecomment-1145151104
> Hi team. I run the product team at O'Reilly, and we've set up a separate server for Kubernetes.io. The Katacoda scenarios on those pages will not be affected by the shutdown of the public site.
Perhaps there's been additional communication since then that's not reflected in this issue.
> Perhaps there's been additional communication since then that's not reflected in this issue.
Surely https://github.com/kubernetes/website/issues/33936#issuecomment-1346832443 is that communication.
https://github.com/kubernetes/website/issues/33936#issuecomment-1346832443 was originally posted as https://github.com/kubernetes/website/issues/37817#issuecomment-1341412180; however, this is the better issue to track the removal, now that it's confirmed as something we must do.
I inquired with killercoda about minikube support, but the bigger question is where to migrate in general. I'm happy to help with the work, but I'm not a maintainer, so I'm not really in a position to pick the destination. And I imagine there are standard processes (RFC?) for making those sorts of decisions.
@sftim @afbjorklund What's the process for making those sorts of decisions?
Killercoda folks have no plans to add minikube to their k8s images, but if it's really necessary they suggest using the ubuntu image and installing minikube manually. Presumably it'd look something like:
```
apt install virtualbox
curl -sL https://path/to/minikube/binary
```
It is not necessary; as we have debated elsewhere, minikube, kind, and kubeadm are all valid ways of installing k8s.
You don't need VirtualBox to run minikube; it is one of the original drivers, but it is not even the only hypervisor anymore...
Here is the old katacoda config:

```
$ more .minikube/config/config.json
{
    "ShowBootstrapperDeprecationNotification": false,
    "WantNoneDriverWarning": false,
    "WantReportErrorPrompt": false,
    "WantUpdateNotification": false,
    "driver": "none",
    "kubernetes-version": "v1.20.2"
}
```
Basically the same as the command:

```
minikube start --driver=none --kubernetes-version=v1.20.2
```
https://minikube.sigs.k8s.io/docs/start/
The main goal of the "machine" provisioning is to create a machine and provision (install) a container runtime, without having to use a proprietary tool like Docker Desktop. If you already have one, you can instead choose to run nested containers (like kind). Finally, once the machine (nodes) is up and running it will bootstrap the cluster (with kubeadm). The rest is just lifecycle tools (including autopause), and convenience wrappers for features such as image or dashboard.
If you want to do your own provisioning and bootstrapping, then you don't have to use minikube but can invent your own.
> they suggest using the ubuntu image and installing minikube manually
You don't have to install VirtualBox, but you do have to install all the other requirements for it to work.
Basically it comes down to installing all the requirements for kubeadm, and also preloading the images? You would also have to download and install the Kubernetes components, but minikube does this for you.
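For example, the preloading step can be done with kubeadm itself; a minimal sketch (the version pin below is illustrative):

```bash
# Pull the control-plane images ahead of time, so "kubeadm init"
# doesn't have to download them on first start
sudo kubeadm config images pull --kubernetes-version=v1.26.0
```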
Side note: this will soon also include the CRI and CNI (currently installed as part of the OS, not Kubernetes):
Naturally you also have to do all the other cluster configuration, such as CRI and CNI and untainting the control plane, etc. Like so: https://github.com/lima-vm/lima/blob/v0.14.2/examples/k8s.yaml (here using containerd and flannel)
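A rough sketch of that manual bootstrap, assuming containerd is already running and flannel as the CNI (as in the lima example above; the flannel manifest URL is the upstream one):

```bash
# Bootstrap a single control-plane node; flannel expects this pod CIDR
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Install the CNI plugin
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Untaint the control plane so workloads can schedule on a one-node cluster
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```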
But it was just easier to do the minikube start?
Or maybe `curl -fsSL https://get.k8s.io | bash`
@afbjorklund Thanks for the information. I'm going to hold off on looking further into this until there is a decision on whether/where to migrate the tutorials. (Would love to understand how those kinds of decisions are made; I assume one of the SIGs would be involved.)
To be clear, killercoda does offer pre-configured k8s images too (see the 'Environments' section here); they just don't include minikube. kubernetes-kubeadm-1node can be used for all tutorials except the minikube one, I imagine.
> I'm going to hold off on looking further into this until there is a decision on whether/where to migrate the tutorials.
I added some new bug reports on how to improve the appearance of the minikube "none" driver...
Even if it ends up not being used for the Kubernetes online tutorials anymore, it is still a bit broken.
Currently it is a bit indecisive about installing on the node (like kubeadm) or in nested containers (like kind). But that seems to apply to all environments, even though running on the control plane is an anti-pattern?
But in a cloud context, it would be more expensive to have 1 VM for the "laptop" and 1 VM for the "cluster".
The main reason why dind is cheaper is that we are cheating Kubernetes about the resources available. It thinks that there are two nodes available, but in reality they are both sharing the CPUs and memory (and disk):

```
minikube start --driver=docker --nodes=2
```
Also it is horribly complex to maintain, and seems to cause issues not found elsewhere, but that's another story.
> kubernetes-kubeadm-1node can be used for all tutorials except the minikube one, I imagine.
I don't really see how you can (easily) run the same environment "at home" (offline), though.
I can see the markdown, but I don't know where the images are created and coming from...?
https://killercoda.com/kubernetes
https://github.com/killercoda/scenarios-kubernetes
There are a lot of different options, but so far none that uses kind / k3d (from what I can tell)?
- kubernetes-kubeadm-1node
- kubernetes-kubeadm-2nodes
- kubernetes-k3s-1node
- kubernetes-k3s-2nodes
But I guess you can start with the ubuntu image, and then use the documentation to install them?
So far there is no support for "nerdctl", which is what one would want to talk with containerd/buildkitd.
For instance, if you want to be able to build images without having to save/load or push/pull them...
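A sketch of that workflow, assuming a running buildkitd (nerdctl builds require it); building into containerd's k8s.io namespace makes the image visible to the kubelet without any save/load step:

```bash
# Build straight into the containerd namespace that Kubernetes uses
nerdctl --namespace k8s.io build -t myimage .

# Verify the kubelet can see the image
nerdctl --namespace k8s.io images | grep myimage
```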
That is the main reason minikube is still defaulting to dockerd (moby): to not have to run two engines.
And for quick experiments, it can be very useful to just be able to run a development container locally.
As far as I know, it is up to SIG Docs to find hosting for the tutorials - and killercoda looks like the target
I was just trying to improve the content, so that you don't run software from 2021 instead of from 2023...
```
version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
```
Our end goal is the same: https://kubernetes.io/blog/2016/07/minikube-easily-run-kubernetes-locally/
I'd like to announce the removal to end users before the switch off happens. See issue #38785.
This issue is relevant to https://github.com/kubernetes/website/pull/38744
@nikitar did you have any other alternatives for migration hosting, besides killercoda?
> kubernetes-kubeadm-1node can be used for all tutorials except the minikube one, I imagine.
This seems accurate, and there is no longer a need to run start.sh or launch.sh for the other tutorials:
https://itnext.io/katacoda-to-killercoda-migration-guide-d21961fc0c9b
"On Katacoda it was (sometimes) necessary to run a magical launch.sh
to get that Kubernetes environment running. This is no longer necessary on Killercoda because all K8s environments are always running and ready!"
I am not sure what the side effects are when they bypass the 2 vCPU requirement from kubeadm?

```
--ignore-preflight-errors=NumCPU
```
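For context, a minimal sketch of where that flag would go, assuming a plain kubeadm bootstrap on a 1-vCPU machine:

```bash
# Skip the preflight check that requires at least 2 vCPUs on the
# control-plane node (acceptable for a throwaway learning environment)
sudo kubeadm init --ignore-preflight-errors=NumCPU
```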
Currently killercoda is running with the "Canal" CNI, which is a combination of Calico and Flannel:
https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel

```
kubectl apply -f canal.yaml
```
It is running a Docker daemon to do builds, and then those images need to be imported into containerd:

```
docker build -t myimage . && docker save myimage | ctr --namespace=k8s.io images import --digests -
```
I don't know if it is possible to access the Kubernetes dashboard, but it seemed like a work in progress?
https://killercoda.com/examples/scenario/kubernetes-dashboard shows how to do it "The Hard Way"
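For comparison, "the hard way" on a generic cluster looks roughly like this, using the upstream dashboard manifests (the v2.7.0 pin is illustrative):

```bash
# Deploy the dashboard from the upstream manifests
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Expose it locally through the apiserver proxy, then browse to:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl proxy
```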
If you want to continue to support hello minikube, then that could be done with the ubuntu image.
https://kubernetes.io/docs/tutorials/hello-minikube/
https://killercoda.com/examples/scenario/ubuntu-simple
But for the rest of the scenarios, there should be nothing blocking the katacoda to killercoda conversion.
I made a patched version of minikube (for NumCPU), and using this version it runs on killercoda:
@reylejano I updated the issue description to highlight the new GitHub discussion.
I would suggest removing minikube from the first scenario as well (all other scenarios start with it already running).
https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/ - call it "Connect to Cluster" or such?
This will allow migrating to killercoda, leaving "Hello, Minikube" as the only page needing minikube.
Then, if someone wants to contribute scenarios on how to install and run minikube or kind "from scratch"...
But those can probably refer to the project documentation, and link straight over to the minikube and kind sites?
The killercoda environment comes with a pre-installed and running kubeadm environment, without start.sh. For the user, this means they can start directly with kubectl, without having to first look at the cluster installer.
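In practice, the first steps of a migrated scenario could then simply be (a sketch, assuming the kubernetes-kubeadm-1node environment):

```bash
# The cluster is already up; no start.sh or launch.sh needed
kubectl get nodes
kubectl create deployment hello --image=nginx
kubectl get pods --watch
```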
That's a good approach! However, I'm not sure whether SIG Docs maintains the minikube documentation. How would it sound to redirect an important topic link of the Kubernetes documentation to documentation outside the Kubernetes docs? :(
> However, I'm not sure whether SIG Docs maintains the minikube documentation.
We are talking about the "Hello, Minikube" page for Katacoda here, not the minikube documentation... Just saying that when the online tutorial is removed (with Katacoda), there will not be much left of it?
So it would then look more like the information on this page: https://kubernetes.io/docs/tasks/tools/ - a short introduction, and then a link with the details. This already happens with, e.g., the container runtimes.
It's fine to keep the separate page of course, but then it needs to be kept in sync with the project.
And the k8s.io front page today skips right over it, and goes straight to "Learn Kubernetes Basics".
> This will allow migrating to killercoda
Please use https://github.com/kubernetes/website/discussions/38878 to discuss what should replace Katacoda - we haven't yet decided.
I'm proposing that we work towards removing Katacoda by the end of March 2023. How does that sound, Kubernetes folks?
Banner to announce that: https://github.com/kubernetes/website/pull/39257
Problem
(The public part of) Katacoda shut down on June 15, 2022: https://www.oreilly.com/online-learning/leveraging-katacoda-technology.html
The remaining part of Katacoda, that Kubernetes uses, is due to shut down ~~late 2022~~ early 2023.
Related to https://github.com/kubernetes/website/issues/33918 and https://github.com/kubernetes/website/issues/38785
Discussion
Use the GitHub Discussion to discuss what we should replace Katacoda with.
Specific steps
SIG Docs members propose to edit the pages that use Katacoda to mark that the sandbox is unavailable. When an alternative is available, update the affected pages to use the alternative.
Pages to Update:
- https://kubernetes.io/docs/tutorials/hello-minikube/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-interactive/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-interactive/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-interactive/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/update/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
- https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-interactive/
- https://kubernetes.io/docs/tutorials/configuration/
- https://kubernetes.io/docs/tutorials/configuration/configure-java-microservice/
- https://kubernetes.io/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice/
- https://kubernetes.io/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/
- https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/
- https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-removal
Additional Information
/remove-kind bug
/priority important-soon