kubernetes / website

Kubernetes website and documentation repo:
https://kubernetes.io
Creative Commons Attribution 4.0 International

YAML files are wrong for older versions of Kubernetes docs. #8051

Closed: steveperry-53 closed this issue 5 years ago

steveperry-53 commented 6 years ago

Problem: This is a general problem with older versions of the Kubernetes docs, but I'll illustrate it with an example.

Here's a task page from the 1.8 docs. The page has a YAML file for a Deployment. The group and version are correct for Kubernetes 1.8: apps/v1beta1.

apiVersion: apps/v1beta1
kind: Deployment
...

But the kubectl command in the next step fetches the YAML file from the docs for the current version of Kubernetes (1.10 as of April 2018), so it gets a file that has apps/v1.

apiVersion: apps/v1
kind: Deployment

If you are running Kubernetes 1.8 (the default for GKE in April 2018) and you run the command shown in the Kubernetes 1.8 docs, you get an error.

kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/hello.yaml
Error from server (BadRequest): error when creating
"https://k8s.io/docs/tasks/access-application-cluster/hello.yaml":
Deployment in version "v1" cannot be handled as a Deployment:
no kind "Deployment" is registered for version "apps/v1"

Proposed Solution: Good question. I'll think about it. Anybody have a suggestion?

Page to Update: Not sure. I don't see how we can update all of the tasks in our versioned docs.

tengqm commented 6 years ago

The v1.8 archive contains the right version of the YAML for users running Kubernetes 1.8, and master (the current version) contains the right YAML for users on Kubernetes 1.10. The problem lies in the command, which always points to the "current" version. So my suggestion would be to remove the hard-coded links from tasks; in other words, don't use HTTP URLs. The more URLs we have, the more broken links we have to maintain and fix.

In this example, I'd prefer telling users to run:

kubectl create -f hello.yaml

Where does the 'hello.yaml' file come from, then? The user can make a good guess. I know this suggestion contradicts our current guidelines, but I think it is the most straightforward solution.

Every embedded YAML file already has a hyperlink to the raw YAML and a button to copy the file content to the clipboard. That looks sufficient to me.
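
For illustration, here is a rough sketch of what an embedded manifest could look like in the 1.8 docs. The name, labels, and image below are placeholders, not the actual contents of hello.yaml:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        # placeholder image; the real file may use something else
        image: gcr.io/google-samples/hello-app:1.0

The user copies that into a local hello.yaml and runs kubectl create -f hello.yaml, and the apiVersion always matches the docs version they are reading.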

heckj commented 6 years ago

Wow, yeah - that's a nasty one. I've thought of some solutions, but my preference is leaning toward discouraging the pattern of loading configurations with -f https://kubernetes.io/... in the documentation and instead embedding the content in the pages, recognizing that in complex examples that's not ideal.

If we had a consistent version name per release and multiple sites, we could embed the versioning in the URL path, which would ensure that it does not accidentally map across versions. Another option would be to use a liquid tag to identify the correct versioned prefix of the URL (that is, 'https://kubernetes.io' vs. 'https://v1-8.docs.kubernetes.io'), but that may actually be much harder to manage and enforce than just not referencing the link - and this whole setup is likely to get perturbed a bit by the migration to Hugo and its navigation/format updates.
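
To sketch the liquid tag idea (docs_base below is a hypothetical variable, not something our site config has today): each release branch would set its own prefix, and task pages would reference it instead of a literal URL.

# _config.yml on the release-1.8 branch (docs_base is hypothetical)
docs_base: https://v1-8.docs.kubernetes.io

# in a task page
kubectl create -f {{ site.docs_base }}/docs/tasks/access-application-cluster/hello.yaml

The same page source would then resolve to the matching archive on every branch - at the cost of one more per-branch value to keep consistent.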

I'm also very loath to open the door to managing back-copies of the documentation, as I just don't think we have the human resources to do those updates consistently and to validate that the examples still work as things change. Honestly, we don't keep up with all the tasks as it is in the current set, let alone validating fixes to older versions.

/cc @kubernetes/sig-docs-maintainers

zacharysarah commented 6 years ago

This is rough. We keep running into it, but after several months of consideration I'm still not sure what the solution is, either. Like @heckj says:

I'm also very loath to open the door to managing back-copies of the documentation, as I just don't think we have the human resources to do those updates consistently and to validate that the examples still work as things change. Honestly, we don't keep up with all the tasks as it is in the current set, let alone validating fixes to older versions.

I agree completely, but I'm also unwilling to dismiss this solely as a headcount problem rather than a challenge that some dedicated DocOps attention could solve. Do folks think this is the sort of challenge a contractor could adequately solve after the Hugo migration? It seems like maintenance would be cyclical and only occasional; would it be possible for us to perform it ourselves once a solution was implemented?

fejta-bot commented 6 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten