Closed mattmoor closed 5 years ago
I also think the Scale sub-resource would be a useful addition to revisions as a standard way for autoscalers to effect scaling. This gives the revision controller the option of modifying or ignoring the request, unlike the alternative of autoscaling the deployment directly.
Yes, perhaps, although I'm not sure the form it takes is appropriate for us.
I think the payload is a single integer (foggy recollection of things people told me), which feels like the kind of thing we have been avoiding in the spec, since that feels like "how many servers" and we're after "serverless".
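For context, the Scale subresource payload is indeed essentially a single integer (`spec.replicas`); for CRDs it is wired up via JSON paths in the CRD definition. A sketch of that wiring against the `apiextensions.k8s.io/v1beta1` API (the group, kind, and names here are illustrative, not the actual Knative CRDs):

```yaml
# Sketch only: a CRD enabling the scale subresource.
# An autoscaler then writes an autoscaling/v1 Scale object whose
# payload boils down to spec.replicas -- a single integer.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: revisions.serving.example.dev   # illustrative name
spec:
  group: serving.example.dev
  version: v1alpha1
  names:
    kind: Revision
    plural: revisions
  subresources:
    scale:
      specReplicasPath: .spec.replicas
      statusReplicasPath: .status.replicas
```

The "how many servers" concern above is exactly this `spec.replicas` field: the Scale contract has no notion of concurrency or request load, only a replica count.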
That said, I could see this being useful to force Reserve -> Active with some minimum "concurrent request" capacity in anticipation of load spikes beyond our capacity to hyperscale up (e.g. if more cluster capacity is needed).
cc @josephburnett for this discussion, but let's track that with a separate issue since it involves a bit of design and specification.
It's less about users and more about implementors of autoscaling strategies. But you're right, off-topic, plus @josephburnett is out until Tuesday and I don't want to start any official discussions without him. :smiley: This document in the team drive has more details if anyone is interested.
/assign @grantr
Moving to M5, since the GKE 1.10 alpha clusters have been showing some issues. I don't think the importance of this has diminished, but I'm less optimistic that we'll be able to make the switch to 1.10 smoothly in M4.
/assign @dprotaso @bsnchan
Since they have a PR. I believe this is blocked on 1.10, which is coming in hot for M5 (it is GA in GKE, but some of our monitoring stuff has issues with it). I'm not particularly optimistic that this will land.
If I understood correctly, CRD subresources are v1beta1 in k8s 1.11: https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/
@dprotaso @bsnchan This should be unblocked. GKE has 1.11, so we can actually test with sub-resources 🎉
Here's a backwards-compatible strategy for adopting the CRD status subresource, with the aim of eventually dropping the generation bumping that currently occurs in the webhook.

1. Enable the status subresource on all CRDs.
   - This causes the API server to start bumping a resource's `metadata.generation` (this is actually the default in K8s 1.13, I believe).
   - Our controllers can and will now use the `/status` endpoint, which prevents them from 'accidentally' updating the reconciled object's `spec`.
2. Drop our functional dependency on `spec.generation`.
   - Mark `spec.generation` as deprecated, i.e. the Configuration controller (`resourcenames.Revision`) uses `metadata.generation` instead of `spec.generation`.
   - (This is essentially a migration of the latest created revision.)
3. Fully drop our dependency on `spec.generation`.
   - Remove `Spec.Generation` from the API (under consideration); the webhook's bump of `spec` then becomes a noop.
4. Change the label applied to revisions from `configurationMetadataGeneration` back to `configurationGeneration`.
   - `configurationGeneration` became 'unavailable' during 0.3 but should now be able to be reclaimed, rather than keeping `configurationMetadataGeneration` and performing another migration.
5. Drop the functional dependency on `configurationMetadataGeneration`.
   - No longer apply the `configurationMetadataGeneration` label to revisions.
6. Remove `Spec.Generation` from the API officially by creating a new CRD version.
   - Drop `Spec.Generation` when we upgrade the CRD to the new version; until then, default `spec.generation` to always be 0.

Moving to 0.4 for the next phase of the work.
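The first step of the strategy above (enabling the status subresource) is a one-line addition to each CRD manifest. A sketch against `apiextensions.k8s.io/v1beta1` (the group, kind, and names are illustrative, not the actual Knative CRDs):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: configurations.serving.example.dev  # illustrative name
spec:
  group: serving.example.dev
  version: v1alpha1
  names:
    kind: Configuration
    plural: configurations
  subresources:
    # With this enabled, writes to /status can no longer touch spec,
    # and the API server bumps metadata.generation on spec changes.
    status: {}
```

Once enabled, controller status updates go through the `/status` endpoint (e.g. a generated clientset's `UpdateStatus` method), so a reconcile loop cannot accidentally write `spec`.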
/milestone Serving 0.4
@mattmoor: The provided milestone is not valid for this repository. Milestones in this repository: [Ice Box, Needs Triage, Serving "v1" (ready for production), Serving 0.3, Serving 0.4]

Use /milestone clear to clear the milestone.
/milestone Serving 0.4
0.4 pieces have landed, moving out.
/milestone Serving 0.6

The PRs for the 0.5 scope have landed in 0.5, so moving this to 0.6 to track the remaining work.
@dprotaso Do you plan to do the 0.6 portion of this?
Oh yeah, it is done.

/close

@dprotaso: Closing this issue.
IIRC K8s 1.10 added support for "status" sub-resources in CRDs. There are a variety of places this is useful to us, including `updateStatus` in each of the controllers, and (likely) the validation logic in the webhook.

Let's scout this functionality in M4, and if useful, adopt. See also this issue, which tracks a 1.10 update.