check the `nameserver` in any pod's /etc/resolv.conf against the cluster IP of the kube-dns service (they should match): https://github.com/vadafoss/daily-updates/issues/3#issuecomment-1494169524
fix tests failing for https://github.com/kubernetes/autoscaler/issues/4231
test the PR in a GKE cluster https://github.com/kubernetes/autoscaler/pull/5594 and ask for review again
check where else I can add test cases for https://github.com/kubernetes/autoscaler/pull/5672, e.g. where `scaleDownNodeToReport` is called
revisit test case changes for `IgnoreDaemonSetsUtilization` made here: https://github.com/kubernetes/autoscaler/pull/5672/files#diff-e7f41c366e8f9e299ef7f726cb60c9fe2e6943fcdb6f2945a0df18d27f6bd015R120-R139 (they don't exercise `IgnoreDaemonSetsUtilization` fully because the pods passed to the test case are normal pods and not DaemonSet pods)
loop back on https://github.com/kubernetes/autoscaler/pull/5594 asking for review on 12th Apr 2023
respond to new comment in https://github.com/kubernetes/autoscaler/issues/4231
check https://github.com/kubernetes/autoscaler/issues/5566#event-8829893947
check comment https://github.com/kubernetes/autoscaler/issues/5377#issuecomment-1480601136
check the Kanban
loop back on https://github.com/kubernetes/autoscaler/pull/5594 asking for review on 12th Apr 2023
replied to https://github.com/kubernetes/autoscaler/issues/5657#issuecomment-1506333162
replied to https://github.com/kubernetes/autoscaler/issues/5668#issuecomment-1506323290
revisit test case changes for `IgnoreDaemonSetsUtilization` made here: https://github.com/kubernetes/autoscaler/pull/5672/files#diff-e7f41c366e8f9e299ef7f726cb60c9fe2e6943fcdb6f2945a0df18d27f6bd015R120-R139 (they don't exercise `IgnoreDaemonSetsUtilization` fully because the pods passed to the test case are normal pods and not DaemonSet pods)
check where else I can add test cases for https://github.com/kubernetes/autoscaler/pull/5672, e.g. where `scaleDownNodeToReport` is called
respond to new comment in https://github.com/kubernetes/autoscaler/issues/4231
check https://github.com/kubernetes/autoscaler/issues/5566#event-8829893947
check comment https://github.com/kubernetes/autoscaler/issues/5377#issuecomment-1480601136
check the Kanban
respond to https://github.com/kubernetes/autoscaler/issues/5657#issuecomment-1506520638
check where else I can add test cases for https://github.com/kubernetes/autoscaler/pull/5672, e.g. where `scaleDownNodeToReport` is called
respond to new comment in https://github.com/kubernetes/autoscaler/issues/4231
check https://github.com/kubernetes/autoscaler/issues/5566#event-8829893947
check comment https://github.com/kubernetes/autoscaler/issues/5377#issuecomment-1480601136
check the Kanban
address the last comment before merge for https://github.com/kubernetes/autoscaler/pull/5594#pullrequestreview-1385626431 (`/lgtm` was removed after I updated the PR; asking for `/lgtm` again from Kuba here: https://github.com/kubernetes/autoscaler/pull/5594#issuecomment-1510720512)
respond to https://github.com/kubernetes/autoscaler/issues/5657#issuecomment-1506520638
check where else I can add test cases for https://github.com/kubernetes/autoscaler/pull/5672, e.g. where `scaleDownNodeToReport` is called
apply for Kubernetes community membership: https://github.com/kubernetes/community/blob/master/community-membership.md
respond to new comment in https://github.com/kubernetes/autoscaler/issues/4231
check https://github.com/kubernetes/autoscaler/issues/5566#event-8829893947
check comment https://github.com/kubernetes/autoscaler/issues/5377#issuecomment-1480601136
check the Kanban
check for scope to add tests where `scaleDownNodeToReport` is called
apply for Kubernetes community membership: https://github.com/kubernetes/community/blob/master/community-membership.md
respond to new comment in https://github.com/kubernetes/autoscaler/issues/4231
check https://github.com/kubernetes/autoscaler/issues/5566#event-8829893947
check comment https://github.com/kubernetes/autoscaler/issues/5377#issuecomment-1480601136
check the Kanban
respond to new comment on my question "Is there any plan to support running out-of-tree scheduler plugins as separate pods?" on k8s slack: https://kubernetes.slack.com/archives/C09TP78DV/p1681826504549139?thread_ts=1681325345.961669&cid=C09TP78DV
check for scope to add tests where `scaleDownNodeToReport` is called
There is a `Filter` function in the extender interface, but the `Filter` here is not the same as the `PreFilter` and `PostFilter` extension points in the scheduling framework. The `Filter` here is a filter in a more general sense (basically, which nodes should be filtered). Call chain:

```
NewSchedulerCommand
-> runCommand
  -> Run
    -> scheduler.Run
      -> scheduleOne
        -> schedulingCycle
          -> schedulePod
            -> findNodesThatFitPod
              -> findNodesThatPassExtenders
                -> extender.Filter
```
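To make the "general sense" filter concrete, here is a minimal sketch of what an extender-style `Filter` does: take a pod plus candidate nodes, keep the ones a predicate accepts, and report why the rest failed. The `ExtenderArgs`/`ExtenderFilterResult` structs below are simplified stand-ins I made up for illustration; the real types live in k8s.io/kube-scheduler/extender/v1 and carry full Pod/Node objects.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-in for the extender request payload (hypothetical;
// the real type is v1.ExtenderArgs in k8s.io/kube-scheduler/extender/v1).
type ExtenderArgs struct {
	Pod       string   `json:"pod"`
	NodeNames []string `json:"nodenames"`
}

// Simplified stand-in for the extender response payload.
type ExtenderFilterResult struct {
	NodeNames   []string          `json:"nodenames"`
	FailedNodes map[string]string `json:"failedNodes"`
}

// filter keeps nodes the predicate accepts and records why the rest failed,
// which is the essence of what the scheduler expects back from extender.Filter.
func filter(args ExtenderArgs, fits func(pod, node string) bool) ExtenderFilterResult {
	result := ExtenderFilterResult{FailedNodes: map[string]string{}}
	for _, n := range args.NodeNames {
		if fits(args.Pod, n) {
			result.NodeNames = append(result.NodeNames, n)
		} else {
			result.FailedNodes[n] = "predicate failed"
		}
	}
	return result
}

func main() {
	args := ExtenderArgs{Pod: "nginx", NodeNames: []string{"node-a", "node-b"}}
	res := filter(args, func(pod, node string) bool { return node != "node-b" })
	out, _ := json.Marshal(res)
	// prints {"nodenames":["node-a"],"failedNodes":{"node-b":"predicate failed"}}
	fmt.Println(string(out))
}
```

In the real extender protocol this function would sit behind an HTTP endpoint that the scheduler calls during `findNodesThatPassExtenders`.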
check for scope to add tests where `scaleDownNodeToReport` is called
apply for Kubernetes community membership: https://github.com/kubernetes/community/blob/master/community-membership.md
respond to new comment in https://github.com/kubernetes/autoscaler/issues/4231
check https://github.com/kubernetes/autoscaler/issues/5566#event-8829893947
check comment https://github.com/kubernetes/autoscaler/issues/5377#issuecomment-1480601136
check the Kanban
Found the culprit. It's happening in the scheme library.
File: ../../../go/pkg/mod/k8s.io/kubernetes@v1.27.0-alpha.1/pkg/scheduler/apis/config/v1/default_plugins.go

```go
// getDefaultPlugins returns the default set of plugins.
func getDefaultPlugins() *v1.Plugins {
	plugins := &v1.Plugins{
		MultiPoint: v1.PluginSet{
			Enabled: []v1.Plugin{
				{Name: names.PrioritySort},
				{Name: names.NodeUnschedulable},
				{Name: names.NodeName},
				{Name: names.TaintToleration, Weight: pointer.Int32(3)},
				{Name: names.NodeAffinity, Weight: pointer.Int32(2)},
				{Name: names.NodePorts},
				{Name: names.NodeResourcesFit, Weight: pointer.Int32(1)},
				{Name: names.VolumeRestrictions},
				{Name: names.EBSLimits},
				{Name: names.GCEPDLimits},
				{Name: names.NodeVolumeLimits},
				{Name: names.AzureDiskLimits},
				{Name: names.VolumeBinding},
				{Name: names.VolumeZone},
				{Name: names.PodTopologySpread, Weight: pointer.Int32(2)},
				{Name: names.InterPodAffinity, Weight: pointer.Int32(2)},
				{Name: names.DefaultPreemption},
				{Name: names.NodeResourcesBalancedAllocation, Weight: pointer.Int32(1)},
				{Name: names.ImageLocality, Weight: pointer.Int32(1)},
				{Name: names.DefaultBinder},
			},
		},
	}
	// ...
```
File: pkg/scheduler/apis/config/v1/defaults.go

```go
func setDefaults_KubeSchedulerProfile(logger klog.Logger, prof *configv1.KubeSchedulerProfile) {
	// Set default plugins.
	prof.Plugins = mergePlugins(logger, getDefaultPlugins(), prof.Plugins)
	// Set default plugin configs.
	// ...
```

File: pkg/scheduler/apis/config/v1/defaults.go

```go
// SetDefaults_KubeSchedulerConfiguration sets additional defaults
func SetDefaults_KubeSchedulerConfiguration(obj *configv1.KubeSchedulerConfiguration) {
	// ...

	// Add the default set of plugins and apply the configuration.
	for i := range obj.Profiles {
		prof := &obj.Profiles[i]
		setDefaults_KubeSchedulerProfile(logger, prof)
	}
	// ...
```
File: pkg/scheduler/apis/config/v1/zz_generated.defaults.go

```go
// RegisterDefaults adds defaulters functions to the given scheme.
// Public to allow building arbitrary schemes.
// All generated defaulters are covering - they call all nested defaulters.
func RegisterDefaults(scheme *runtime.Scheme) error {
	// ...
	scheme.AddTypeDefaultingFunc(&v1.KubeSchedulerConfiguration{}, func(obj interface{}) {
		SetObjectDefaults_KubeSchedulerConfiguration(obj.(*v1.KubeSchedulerConfiguration))
	})
	// ...
}

// ...

func SetObjectDefaults_KubeSchedulerConfiguration(in *v1.KubeSchedulerConfiguration) {
	SetDefaults_KubeSchedulerConfiguration(in)
}
```

File: pkg/scheduler/apis/config/v1/defaults.go

```go
func addDefaultingFuncs(scheme *runtime.Scheme) error {
	return RegisterDefaults(scheme)
}
```

File: pkg/scheduler/apis/config/v1/register.go

```go
localSchemeBuilder.Register(addDefaultingFuncs)
```
File: ../../../go/pkg/mod/k8s.io/kubernetes@v1.27.0-alpha.1/pkg/scheduler/apis/config/scheme/scheme.go

```go
var (
	// Scheme is the runtime.Scheme to which all kubescheduler api types are registered.
	Scheme = runtime.NewScheme()

	// Codecs provides access to encoding and decoding for the scheme.
	Codecs = serializer.NewCodecFactory(Scheme, serializer.EnableStrict)
)

func init() {
	AddToScheme(Scheme)
}

// AddToScheme builds the kubescheduler scheme using all known versions of the kubescheduler api.
func AddToScheme(scheme *runtime.Scheme) {
	utilruntime.Must(config.AddToScheme(scheme))
	utilruntime.Must(configv1beta2.AddToScheme(scheme))
	utilruntime.Must(configv1beta3.AddToScheme(scheme))
	utilruntime.Must(configv1.AddToScheme(scheme))
	utilruntime.Must(scheme.SetVersionPriority(
		configv1.SchemeGroupVersion,
		configv1beta3.SchemeGroupVersion,
		configv1beta2.SchemeGroupVersion,
	))
}
```
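The chain above boils down to one pattern: a scheme keeps a map from a Go type to a defaulting function, and running the defaulter for a decoded object is what injects `getDefaultPlugins`. A minimal sketch of that pattern (toy `Scheme` and `KubeSchedulerConfiguration` types of my own, not the real `runtime.Scheme` implementation):

```go
package main

import (
	"fmt"
	"reflect"
)

// Scheme is a tiny stand-in for runtime.Scheme's defaulting machinery:
// a registry of defaulting funcs keyed by the object's concrete type.
type Scheme struct {
	defaulters map[reflect.Type]func(interface{})
}

func NewScheme() *Scheme {
	return &Scheme{defaulters: map[reflect.Type]func(interface{}){}}
}

// AddTypeDefaultingFunc mirrors scheme.AddTypeDefaultingFunc: register a
// defaulter for objects of the same type as obj.
func (s *Scheme) AddTypeDefaultingFunc(obj interface{}, fn func(interface{})) {
	s.defaulters[reflect.TypeOf(obj)] = fn
}

// Default mirrors scheme.Default: look up and run the registered defaulter.
func (s *Scheme) Default(obj interface{}) {
	if fn, ok := s.defaulters[reflect.TypeOf(obj)]; ok {
		fn(obj)
	}
}

// Toy config type; the real one has Profiles with per-profile Plugins.
type KubeSchedulerConfiguration struct {
	Plugins []string
}

func main() {
	scheme := NewScheme()
	// Analogous to what RegisterDefaults does in zz_generated.defaults.go.
	scheme.AddTypeDefaultingFunc(&KubeSchedulerConfiguration{}, func(obj interface{}) {
		cfg := obj.(*KubeSchedulerConfiguration)
		if len(cfg.Plugins) == 0 {
			// Stand-in for getDefaultPlugins().
			cfg.Plugins = []string{"PrioritySort", "NodeName"}
		}
	})

	cfg := &KubeSchedulerConfiguration{}
	scheme.Default(cfg)
	fmt.Println(cfg.Plugins) // prints [PrioritySort NodeName]
}
```

This is why merely decoding a config through the scheduler's `Codecs` (built on the package-level `Scheme` that `init()` populates) is enough to pull in the full default plugin set.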
reply to slack thread around "does scheduler support csi driver like plugins" in #sig-scheduling
check where else I can add test cases for https://github.com/kubernetes/autoscaler/pull/5672
check for scope to add tests where `scaleDownNodeToReport` is called
fix test cases failing for https://github.com/kubernetes/autoscaler/pull/5672
start working on a new CA issue
apply for Kubernetes community membership: https://github.com/kubernetes/community/blob/master/community-membership.md
respond to new comment in https://github.com/kubernetes/autoscaler/issues/4231
check https://github.com/kubernetes/autoscaler/issues/5566#event-8829893947
check comment https://github.com/kubernetes/autoscaler/issues/5377#issuecomment-1480601136
check the Kanban
fix test cases failing for https://github.com/kubernetes/autoscaler/pull/5672
start working on a new CA issue
apply for Kubernetes community membership: https://github.com/kubernetes/community/blob/master/community-membership.md
respond to new comment in https://github.com/kubernetes/autoscaler/issues/4231
check https://github.com/kubernetes/autoscaler/issues/5566#event-8829893947
check comment https://github.com/kubernetes/autoscaler/issues/5377#issuecomment-1480601136
check the Kanban
tweeted about scheduler extenders: https://twitter.com/_vadasambar/status/1651811082862997504
working on a blogpost
come up with a way to test https://github.com/kubernetes/autoscaler/pull/5708
check if validating the config is really required for https://github.com/kubernetes/autoscaler/pull/5708
look at the behavior of scheduler when it fails to reach extender and replicate it in https://github.com/kubernetes/autoscaler/pull/5708
apply for Kubernetes community membership: https://github.com/kubernetes/community/blob/master/community-membership.md
respond to new comment in https://github.com/kubernetes/autoscaler/issues/4231
check https://github.com/kubernetes/autoscaler/issues/5566#event-8829893947
check comment https://github.com/kubernetes/autoscaler/issues/5377#issuecomment-1480601136
check the Kanban
Last month's thread: Mar 2023: https://github.com/vadafoss/daily-updates/issues/7
What
This is an issue for posting daily status updates on what I am doing as a member of the vadafoss community.
How
Follow this format:
- やったこと (yatta-koto, lit. 'things I did' in Japanese; I just find it sounds closer to what I want to say)
- Problem (optional): problems I faced
- Try (optional): what I am trying
- TODO (optional): things to do
- WIP (optional): work in progress