kubernetes / website

Kubernetes website and documentation repo:
https://kubernetes.io
Creative Commons Attribution 4.0 International

Improve the indexing of generated kubeadm references #24558

Open tengqm opened 4 years ago

tengqm commented 4 years ago

This is a Feature Request

What would you like to be added

Automated indexing of the generated kubeadm references under docs/reference/setup-tools/kubeadm/generated/.

Why is this needed

The current practice of manually creating and maintaining overview pages for these commands is not sustainable.

Comments

For background, see #24542.

neolit123 commented 4 years ago

this comment has a proposal: https://github.com/kubernetes/website/issues/24542#issuecomment-707804221

i think we could do the following:

EDIT: and if the kubeadm commands lack sufficient information it should be added in the source code of kubeadm.

would being able to navigate around the /generated pages sound like a good idea? e.g. https://kubernetes.io/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_view/

@sftim also proposed a couple of ways this can be implemented.

i think i can help on the generator side but not for Hugo related changes.

neolit123 commented 4 years ago

/sig cluster-lifecycle docs

sftim commented 4 years ago

/triage accepted

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

neolit123 commented 3 years ago

/remove-lifecycle stale

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

sftim commented 3 years ago

Still worth doing, I think.

/remove-lifecycle stale

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

neolit123 commented 3 years ago

/remove-lifecycle stale

i can try helping in 1.23.

neolit123 commented 3 years ago

@tengqm i started playing with this but i noticed one problem in kubeadm. kubeadm recently added a panic when the binary build version information is not injected via Go's ldflags, to indicate to users that the binary was not built properly. this blocks the usage of https://github.com/kubernetes-sigs/reference-docs/blob/master/gen-compdocs/generators/gen_kube_docs.go#L89 in the generator (and a call to `go run main.go build kubeadm`).

here is a fix for that: https://github.com/kubernetes/kubernetes/pull/104338
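For context, the general pattern is: a package-level variable gets overridden at build time via `-ldflags -X`, and the binary can refuse to run when it keeps its placeholder default. A simplified sketch of that pattern (the `version` variable, `checkVersion` helper, and messages here are illustrative; kubeadm's real version handling does not take this exact form):

```go
package main

import "fmt"

// version is meant to be injected at build time, e.g.:
//   go build -ldflags "-X main.version=v1.23.0" .
// Without the flag it keeps the placeholder default.
var version = "unknown"

// checkVersion reports whether build version information was injected.
// Turning the missing-version case into a hard panic is what broke
// `go run main.go build kubeadm` in the docs generator, since `go run`
// does not pass the release ldflags.
func checkVersion() error {
	if version == "unknown" {
		return fmt.Errorf("binary was built without version information")
	}
	return nil
}

func main() {
	if err := checkVersion(); err != nil {
		fmt.Println("warning:", err)
		return
	}
	fmt.Println("version:", version)
}
```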

there seem to be other problems with the current k/k branch though, which i'm investigating:

```
$ go run main.go build kubeadm
# k8s.io/kubernetes/pkg/volume/util/subpath
../../../k8s.io/kubernetes/pkg/volume/util/subpath/subpath_linux.go:214:18: mounter.MountSensitiveWithoutSystemdWithMountFlags undefined (type mount.Interface has no field or method MountSensitiveWithoutSystemdWithMountFlags)
# k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/conversion.go:33:115: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/conversion.go:38:167: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/defaults.go:32:68: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/zz_generated.conversion.go:48:89: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/zz_generated.conversion.go:49:171: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/zz_generated.conversion.go:53:34: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/zz_generated.conversion.go:54:119: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/zz_generated.conversion.go:61:119: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/zz_generated.conversion.go:66:171: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
# k8s.io/kubernetes/pkg/kubelet/cri/remote
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:573:71: undefined: v1alpha2.PodSandboxStats
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:578:30: r.runtimeClient.PodSandboxStats undefined (type v1alpha2.RuntimeServiceClient has no field or method PodSandboxStats)
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:578:53: undefined: v1alpha2.PodSandboxStatsRequest
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:594:60: undefined: v1alpha2.PodSandboxStatsFilter
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:594:98: undefined: v1alpha2.PodSandboxStats
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:600:30: r.runtimeClient.ListPodSandboxStats undefined (type v1alpha2.RuntimeServiceClient has no field or method ListPodSandboxStats)
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:600:57: undefined: v1alpha2.ListPodSandboxStatsRequest
```

EDIT: ok, looks like the problem is that the go.mod file of the reference docs is pinning a tag for the k8s.io/ repositories e.g.:

    k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.22.0

but if you are trying to build from k/k latest, some code there might require the latest k8s.io/kube-controller-manager and not the v0.22 tag. a fix for that would be to pin everything to the staging directory under k/k.

the following go.mod makes the generator always depend on the k/k version in your local clone when generating the reference content:

```
module github.com/kubernetes-sigs/reference-docs/gen-compdocs

go 1.16

require (
	github.com/golangplus/testing v1.0.0 // indirect
	github.com/spf13/cobra v1.1.3
	github.com/spf13/pflag v1.0.5
	github.com/yuin/goldmark v1.3.5
	github.com/yuin/goldmark-highlighting v0.0.0-20200307114337-60d527fdb691
	k8s.io/component-base v0.0.0
	k8s.io/kubectl v0.0.0
	k8s.io/kubernetes v0.0.0
)

replace (
	k8s.io/api => ../../../k8s.io/kubernetes/staging/src/k8s.io/api
	k8s.io/apiextensions-apiserver => ../../../k8s.io/kubernetes/staging/src/k8s.io/apiextensions-apiserver
	k8s.io/apimachinery => ../../../k8s.io/kubernetes/staging/src/k8s.io/apimachinery
	k8s.io/apiserver => ../../../k8s.io/kubernetes/staging/src/k8s.io/apiserver
	k8s.io/cli-runtime => ../../../k8s.io/kubernetes/staging/src/k8s.io/cli-runtime
	k8s.io/client-go => ../../../k8s.io/kubernetes/staging/src/k8s.io/client-go
	k8s.io/cloud-provider => ../../../k8s.io/kubernetes/staging/src/k8s.io/cloud-provider
	k8s.io/cluster-bootstrap => ../../../k8s.io/kubernetes/staging/src/k8s.io/cluster-bootstrap
	k8s.io/code-generator => ../../../k8s.io/kubernetes/staging/src/k8s.io/code-generator
	k8s.io/component-base => ../../../k8s.io/kubernetes/staging/src/k8s.io/component-base
	k8s.io/component-helpers => ../../../k8s.io/kubernetes/staging/src/k8s.io/component-helpers
	k8s.io/controller-manager => ../../../k8s.io/kubernetes/staging/src/k8s.io/controller-manager
	k8s.io/cri-api => ../../../k8s.io/kubernetes/staging/src/k8s.io/cri-api
	k8s.io/csi-translation-lib => ../../../k8s.io/kubernetes/staging/src/k8s.io/csi-translation-lib
	k8s.io/kube-aggregator => ../../../k8s.io/kubernetes/staging/src/k8s.io/kube-aggregator
	k8s.io/kube-controller-manager => ../../../k8s.io/kubernetes/staging/src/k8s.io/kube-controller-manager
	k8s.io/kube-proxy => ../../../k8s.io/kubernetes/staging/src/k8s.io/kube-proxy
	k8s.io/kube-scheduler => ../../../k8s.io/kubernetes/staging/src/k8s.io/kube-scheduler
	k8s.io/kubectl => ../../../k8s.io/kubernetes/staging/src/k8s.io/kubectl
	k8s.io/kubelet => ../../../k8s.io/kubernetes/staging/src/k8s.io/kubelet
	k8s.io/kubernetes => ../../../k8s.io/kubernetes
	k8s.io/legacy-cloud-providers => ../../../k8s.io/kubernetes/staging/src/k8s.io/legacy-cloud-providers
	k8s.io/metrics => ../../../k8s.io/kubernetes/staging/src/k8s.io/metrics
	k8s.io/mount-utils => ../../../k8s.io/kubernetes/staging/src/k8s.io/mount-utils
	k8s.io/pod-security-admission => ../../../k8s.io/kubernetes/staging/src/k8s.io/pod-security-admission
	k8s.io/sample-apiserver => ../../../k8s.io/kubernetes/staging/src/k8s.io/sample-apiserver
)
```

neolit123 commented 3 years ago

/assign

tengqm commented 3 years ago

@neolit123 Right. Pinning to tags was the resolution to ensure everyone can regenerate the same output if needed, for a particular version. k/k is a moving target we cannot afford to track.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

neolit123 commented 2 years ago

/remove-lifecycle frozen

I have this as a todo once there is more free time. At least the kubeadm CLI does not change that much.

neolit123 commented 2 years ago

/remove-lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

neolit123 commented 2 years ago

/lifecycle frozen

k8s-triage-robot commented 1 year ago

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

divya-mohan0209 commented 11 months ago

/triage accepted

divya-mohan0209 commented 11 months ago

@neolit123 Do you anticipate having some cycles to work on the kubeadm generator side of things in the near future?

sftim commented 11 months ago

(alternatively, maybe SIG Cluster Lifecycle have capacity to support another contributor to work on this generator code?)

neolit123 commented 11 months ago

> @neolit123 Do you anticipate having some cycles to work on the kubeadm generator side of things in the near future?

generally, no. this would help all sides and it's a nice-to-have, but it's low priority in my book, mainly because our CLI commands (not only kubeadm) are slow-moving.

> (alternatively, maybe SIG Cluster Lifecycle have capacity to support another contributor to work on this generator code?)

+1 but i don't think anyone will take it. if SIG docs wants to do LFX or GSoC for this it could be a nice task.

neolit123 commented 11 months ago

/unassign