Closed vagababov closed 4 years ago
@vagababov: The label(s) kind/proposal
cannot be applied. These labels are supported: ``
Hi, I'm a student at UT Austin looking to contribute to a container-related project as part of my virtualization course. I'm interested in picking this issue up; could I get assigned to it?
Hi, I'm also a part of the same group for the project. Could I get assigned to this issue as well?
Hi, welcome to Knative.
I have already started working on part of this (the second part), but the first one is still up for grabs.
Hi @vagababov,
Our final project asks that we make a contribution to an open source project that is at least tangentially related to virtualization, e.g. serverless frameworks.
Knative had the strongest, most active community we could find thus far, and my partner and I were also interested in learning more about it.
This issue seemed easier to tackle than the others; we felt it was doable at our level.
Can I recommend: https://github.com/knative/serving/issues/3415 instead?
Trying to implement part II turned out to be more hassle than benefit, at least the way things are done now. (The values also come out differently, since we always divide by the number of buckets in the window, rather than the number of buckets for which we actually recorded data, which is quite different for new revisions.)
/assign /milestone Serving 0.12
@vagababov: The provided milestone is not valid for this repository. Milestones in this repository: [Ice Box, Needs Triage, Serving "v1" (ready for production), Serving 0.12.x, Serving 0.13.x]
Use /milestone clear to clear the milestone.
/milestone Serving 0.12.x
Done in:
Also: #6487, #6447, #6498
In what area(s)?
/area autoscale /kind proposal
Describe the feature
Currently, when we compute average concurrency or average RPS over the stable window in the autoscaler, we basically iterate over the buckets and sum them. While the number of buckets is limited (60/2 = 30 by default), the stable window can be made quite large: hours, days... we don't have an upper limit right now (which might be another good issue).
It is quite easy to improve this: precompute the current value and just update the stored counter incrementally on each tick:
total += t[now] - t[windowSecondsBefore]
This can be implemented with the current map-based solution for the buckets. But here comes part II: we should use a fixed-memory circular buffer to store the scrape/activator-returned values. This limits the GC work that has to be done, and keeping the whole data in an array rather than a map permits data locality, further improving performance.
This changes the formula above: instead of looking up t[windowSecondsBefore] in the map, we subtract the bucket value being overwritten in the ring.
/cc @markusthoemmes
Feature track: https://docs.google.com/document/d/1lFown9jjBSOEBieSG3kGA8-1_tAgdDxQpeo-jtgXPCE/edit#heading=h.yhdv39zgzec6
From @markusthoemmes & @vagababov meeting in Düsseldorf