pusher / k8s-spot-rescheduler

Tries to move K8s Pods from on-demand to spot instances
Apache License 2.0

Docs out of date for v0.3.0 #67

Open cep21 opened 4 years ago

cep21 commented 4 years ago

Docs mention `node-role.kubernetes.io/worker=true`, but the code for v0.3.0 expects `kubernetes.io/role=worker` by default (https://github.com/pusher/k8s-spot-rescheduler/blob/v0.3.0/nodes/nodes.go#L31)
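
As a workaround until the docs are fixed, nodes can be labeled to match what the v0.3.0 binary actually looks for. A minimal sketch (the node name `my-on-demand-node` is a placeholder):

```
kubectl label node my-on-demand-node kubernetes.io/role=worker
```

Alternatively, the `--on-demand-node-label` / `--spot-node-label` flags shown below can be set to match whatever labels the nodes already carry.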

Also, the example deployment (https://github.com/pusher/k8s-spot-rescheduler/blob/master/deploy/deployment.yaml#L31) could change the block

          command:
            - rescheduler
            - -v=2
            - --running-in-cluster=true
            - --namespace=kube-system
            - --housekeeping-interval=10s
            - --node-drain-delay=10m
            - --pod-eviction-timeout=2m
            - --max-graceful-termination=2m
            - --listen-address=0.0.0.0:9235
            - --on-demand-node-label=node-role.kubernetes.io/worker
            - --spot-node-label=node-role.kubernetes.io/spot-worker

into

          args:
            - -v=2
            - --running-in-cluster=true
            - --namespace=kube-system
            - --housekeeping-interval=10s
            - --node-drain-delay=10m
            - --pod-eviction-timeout=2m
            - --max-graceful-termination=2m
            - --listen-address=0.0.0.0:9235
            - --on-demand-node-label=node-role.kubernetes.io/worker
            - --spot-node-label=node-role.kubernetes.io/spot-worker

to both resolve the issue with the binary being renamed (away from `rescheduler`; see https://github.com/pusher/k8s-spot-rescheduler/issues/64#issuecomment-535571362) and to be resilient to future binary renames.
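
The reason `args` is rename-proof: in Kubernetes, `command` overrides the image's `ENTRYPOINT` (so the manifest must name the binary), while `args` only overrides the image's `CMD` and leaves the `ENTRYPOINT` intact. A minimal illustrative fragment:

```yaml
# command replaces the image ENTRYPOINT: the manifest hard-codes the
# binary name, so renaming it in the image breaks the pod.
command:
  - rescheduler
  - -v=2

# args keeps the image ENTRYPOINT and only supplies arguments, so the
# binary can be renamed in the image without touching the manifest.
args:
  - -v=2
```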

theobarberbany commented 4 years ago

@cep21 Great spot! Thanks for bringing this up. I'll try getting to it this afternoon, unless you fancy opening a PR?