Open tengqm opened 4 years ago
this comment has a proposal: https://github.com/kubernetes/website/issues/24542#issuecomment-707804221
i think we could do the following:
- on the main pages that are currently manually maintained for `kubeadm config` etc., "include" the parent generated page (https://github.com/kubernetes/website/blob/master/content/en/docs/reference/setup-tools/kubeadm/kubeadm-config.md)
- the generated page could have relative hyperlinks to the generated pages of sub-sub-commands (e.g. kubeadm_config.md having links to `kubeadm_config_*.md`), and this can reveal the whole tree relation between sub-commands and sub-sub-commands (and deeper)
- this means that ideally the generated pages should have a title on top too (https://github.com/kubernetes/website/blob/master/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config.md, https://github.com/kubernetes/website/blob/master/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_view.md) and a section for the sub-commands (a rough sketch is shown below)
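a rough sketch of what such a generated kubeadm_config.md with a title and a sub-command section could look like (the front matter fields, sub-command list and short descriptions here are illustrative, not the current generator output):

```markdown
---
title: kubeadm config
---

## kubeadm config

Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster.

### Sub-commands

* [kubeadm config images](kubeadm_config_images.md) - interact with container images used by kubeadm
* [kubeadm config view](kubeadm_config_view.md) - view the kubeadm configuration stored inside the cluster
```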
EDIT: and if the kubeadm commands lack sufficient information, it should be added in the source code of kubeadm.

would navigating around /generated pages sound like a good idea? e.g. https://kubernetes.io/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_view/
@sftim also proposed a couple of ways this can be implemented.
i think i can help on the generator side but not for Hugo related changes.
/sig cluster-lifecycle docs
/triage accepted
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Still worth doing I think.
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
i can try helping in 1.23.
@tengqm
i started playing with this but i noticed one problem in kubeadm.
kubeadm recently added a panic if the binary build version information is not included via Go's ldflags, to indicate to users that the binary was not built properly. this blocks the usage of
https://github.com/kubernetes-sigs/reference-docs/blob/master/gen-compdocs/generators/gen_kube_docs.go#L89
in the generator (and a call to `go run main.go build kubeadm`).
here is a fix for that: https://github.com/kubernetes/kubernetes/pull/104338
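for context, a minimal sketch of the kind of cobra-based markdown generation involved (the kubeadm command constructor import and the output directory are assumptions for illustration; this is not the exact gen_kube_docs.go code):

```go
package main

import (
	"log"
	"os"

	"github.com/spf13/cobra/doc"

	kubeadmcmd "k8s.io/kubernetes/cmd/kubeadm/app/cmd"
)

func main() {
	outDir := "generated"
	if err := os.MkdirAll(outDir, 0o755); err != nil {
		log.Fatal(err)
	}
	// building the kubeadm command tree in-process is what trips the
	// new "version info not set via ldflags" panic described above
	cmd := kubeadmcmd.NewKubeadmCommand(os.Stdin, os.Stdout, os.Stderr)
	// GenMarkdownTree writes one markdown page per (sub)command, with
	// relative "SEE ALSO" links between parent and child commands
	if err := doc.GenMarkdownTree(cmd, outDir); err != nil {
		log.Fatal(err)
	}
}
```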
there seem to be other problems with the current k/k branch though, which i'm investigating:
```
$ go run main.go build kubeadm
# k8s.io/kubernetes/pkg/volume/util/subpath
../../../k8s.io/kubernetes/pkg/volume/util/subpath/subpath_linux.go:214:18: mounter.MountSensitiveWithoutSystemdWithMountFlags undefined (type mount.Interface has no field or method MountSensitiveWithoutSystemdWithMountFlags)
# k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/conversion.go:33:115: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/conversion.go:38:167: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/defaults.go:32:68: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/zz_generated.conversion.go:48:89: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/zz_generated.conversion.go:49:171: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/zz_generated.conversion.go:53:34: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/zz_generated.conversion.go:54:119: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/zz_generated.conversion.go:61:119: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
../../../k8s.io/kubernetes/pkg/controller/volume/ephemeral/config/v1alpha1/zz_generated.conversion.go:66:171: undefined: "k8s.io/kube-controller-manager/config/v1alpha1".EphemeralVolumeControllerConfiguration
# k8s.io/kubernetes/pkg/kubelet/cri/remote
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:573:71: undefined: v1alpha2.PodSandboxStats
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:578:30: r.runtimeClient.PodSandboxStats undefined (type v1alpha2.RuntimeServiceClient has no field or method PodSandboxStats)
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:578:53: undefined: v1alpha2.PodSandboxStatsRequest
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:594:60: undefined: v1alpha2.PodSandboxStatsFilter
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:594:98: undefined: v1alpha2.PodSandboxStats
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:600:30: r.runtimeClient.ListPodSandboxStats undefined (type v1alpha2.RuntimeServiceClient has no field or method ListPodSandboxStats)
../../../k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go:600:57: undefined: v1alpha2.ListPodSandboxStatsRequest
```
EDIT: ok, looks like the problem is that the go.mod file of the reference docs is pinning a tag for the k8s.io/ repositories, e.g.:

```
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.22.0
```

but if you are trying to build from k/k latest, some code there might require the latest k8s.io/kube-controller-manager and not the v0.22 tag. a fix for that would be to pin everything to the staging directory under k/k, which lets you always depend on the k/k version in your clone when generating the reference content.
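a rough sketch of what such replace directives could look like (the relative paths are illustrative and assume the reference-docs module sits next to a k/k clone in a GOPATH-style layout, so adjust them to your setup):

```
// hypothetical go.mod replace directives that point at the k/k staging
// tree instead of published v0.22.x tags
replace (
	k8s.io/kubernetes => ../../../k8s.io/kubernetes
	k8s.io/kube-controller-manager => ../../../k8s.io/kubernetes/staging/src/k8s.io/kube-controller-manager
	k8s.io/cri-api => ../../../k8s.io/kubernetes/staging/src/k8s.io/cri-api
)
```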
/assign
@neolit123 Right. Pinning to tags was the resolution to ensure everyone can regenerate the same output if needed, for a particular version. k/k is a moving target we cannot afford to track.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle frozen
I have this as a todo once there is more free time. At least the kubeadm cli does not change that much.
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/triage accepted
@neolit123 Do you anticipate having some cycles to work on the kubeadm generator side of things in the near future?
(alternatively, maybe SIG Cluster Lifecycle have capacity to support another contributor to work on this generator code?)
> @neolit123 Do you anticipate having some cycles to work on the kubeadm generator side of things in the near future?

generally no, this will help all sides and it's a nice to have, but it's a low priority in my book, mainly because our CLI commands (not only kubeadm) are slow moving.

> (alternatively, maybe SIG Cluster Lifecycle have capacity to support another contributor to work on this generator code?)

+1 but i don't think anyone will take it. if SIG Docs wants to do LFX or GSoC for this it could be a nice task.
/unassign
This is a Feature Request
What would you like to be added
Automated indexing of the generated kubeadm references under docs/reference/setup-tools/kubeadm/generated/.

Why is this needed
The current practice of manually creating and maintaining overview pages for these commands is not sustainable.
Comments
For background, see #24542.