wbuchwalter / Kubernetes-acs-engine-autoscaler

[Deprecated] Node-level autoscaler for Kubernetes clusters created with acs-engine.

Unexpected error: <class 'requests.exceptions.HTTPError'> #89

Closed markwragg closed 8 months ago

markwragg commented 6 years ago

I am trying to use Helm to provision the autoscaler. The container starts, but repeatedly returns:

autoscaler.cluster - WARNING - Unexpected error: <class 'requests.exceptions.HTTPError'>

I've read the two issues where this error was previously reported, but unfortunately they haven't helped.

This is my (redacted) values.yaml:

replicaCount: 1
## Image for kubernetes-acs-engine-autoscaler
## Will update the image and tag later
image:
  repository: wbuchwalter/kubernetes-acs-engine-autoscaler
  tag: 2.1.1
  pullPolicy: Always
rbac:
  install: true
  apiVersion: v1beta1
acsenginecluster:
  resourcegroup: redacted
  azurespappid: redacted
  azurespsecret: redacted
  azuresptenantid: redacted
  kubeconfigprivatekey: redacted
  subscriptionid: redacted
  clientprivatekey: redacted
  caprivatekey: redacted
  acsdeployment: redacted

If I change the tag to latest, the container doesn't run and I get a message indicating that I didn't supply the required Azure settings. This doesn't happen when the tag is 2.1.1, as above.

I've also tried adding etcdClientPrivateKey and etcdServerPrivateKey, but this doesn't seem to help. Any idea what I'm doing wrong?

Many Thanks, Mark

alexbeloi commented 6 years ago

I seem to be hitting the same issue when deploying the stable helm-chart.

deploying via helm install stable/acs-engine-autoscaler -f values.yaml

With tag: 2.1.1 in values.yaml:

2018-05-15 23:56:30,287 - autoscaler.cluster - DEBUG - Using kube service account
2018-05-15 23:56:30,288 - autoscaler.cluster - INFO - ++++ Running Scaling Loop ++++++
2018-05-15 23:56:30,333 - autoscaler.cluster - WARNING - Unexpected error: <class 'requests.exceptions.HTTPError'>
2018-05-15 23:56:30,334 - autoscaler - WARNING - backoff: 60 
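The log only reveals the exception class, which is what makes this hard to debug: a requests HTTPError actually carries the failed response, including the status code that would point at the real cause (e.g. bad credentials vs. a wrong endpoint). A minimal sketch of the difference, using a hand-built response to stand in for the failing call (the URL here is a placeholder, not the real endpoint):

```python
import requests

# Simulate the failing HTTP call with a hand-built Response
# (hypothetical URL; the real endpoint is not shown in the logs).
resp = requests.models.Response()
resp.status_code = 403
resp.url = "https://example.invalid/api"

try:
    resp.raise_for_status()
except requests.exceptions.HTTPError as exc:
    print(type(exc))                 # all the autoscaler's warning reveals
    print(exc.response.status_code)  # the detail worth surfacing: 403
```

Running the image with --debug (or patching the handler to use logger.exception) would surface the status code and traceback instead of just the class name.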

With tag: latest in values.yaml:

2018-05-16 00:03:12,667 - autoscaler - ERROR - Missing Azure credentials. Please provide service_principal_app_id, service_principal_secret, service_principal_tenant_id and subscription_id.
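This second failure mode suggests the stable chart renders its values under option names that the latest image no longer recognizes, so all four credentials appear unset at startup. A hedged reconstruction of the check implied by the error message (the option names come from the log line; the logic is assumed, not taken from the actual source):

```python
# Required options named in the "Missing Azure credentials" error.
REQUIRED = [
    "service_principal_app_id",
    "service_principal_secret",
    "service_principal_tenant_id",
    "subscription_id",
]

def missing_azure_credentials(opts: dict) -> list:
    """Return the required options that are unset or empty."""
    return [name for name in REQUIRED if not opts.get(name)]

# If the chart passes values under names the image doesn't read,
# the image sees an empty config and reports all four as missing:
missing = missing_azure_credentials({})
print("Missing Azure credentials. Please provide " + ", ".join(missing))
```

If that's what is happening, the chart's rendered flags/env vars and the image's expected option names need to be compared release by release.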

If run locally from this repo:

$ docker build -t autoscaler .
$ ./devenvh.sh
#in the container
# python main.py \
--resource-group redacted \
--acs-deployment redacted \
--service-principal-app-id 'redacted' \
--service-principal-secret 'redacted' \
--service-principal-tenant-id 'redacted' \
--subscription-id 'redacted' \
--debug \
--kubeconfig /root/.kube/config \
--client-private-key 'redacted' \
--ca-private-key 'redacted'

it seems to work:

2018-05-16 00:17:42,348 - autoscaler.cluster - INFO - ++++ Running Scaling Loop ++++++
2018-05-16 00:17:42,348 - autoscaler.cluster - INFO - Debug mode is on
2018-05-16 00:17:43,064 - autoscaler.cluster - INFO - Pods to schedule: 0
2018-05-16 00:17:43,064 - autoscaler.cluster - INFO - ++++ Scaling Up Begins ++++++
2018-05-16 00:17:43,065 - autoscaler.cluster - INFO - Nodes: 4
2018-05-16 00:17:43,065 - autoscaler.cluster - INFO - To schedule: 0
2018-05-16 00:17:43,065 - autoscaler.cluster - INFO - Pending pods: 0
2018-05-16 00:17:43,066 - autoscaler.cluster - INFO - ++++ Scaling Up Ends ++++++
2018-05-16 00:17:43,066 - autoscaler.cluster - INFO - ++++ Maintenance Begins ++++++
2018-05-16 00:17:43,066 - autoscaler.engine_scaler - INFO - ++++ Maintaining Nodes ++++++
2018-05-16 00:17:43,067 - autoscaler.engine_scaler - INFO - node: k8s-agentpool1-41030516-0                                                   state: spare-agent
2018-05-16 00:17:43,068 - autoscaler.engine_scaler - INFO - node: k8s-agentpool2-41030516-0                                                   state: busy
2018-05-16 00:17:43,069 - autoscaler.engine_scaler - INFO - node: k8s-agentpool2-41030516-1                                                   state: busy
2018-05-16 00:17:43,069 - autoscaler.engine_scaler - INFO - node: k8s-agentpool2-41030516-2                                                   state: busy
2018-05-16 00:17:43,069 - autoscaler.cluster - INFO - ++++ Maintenance Ends ++++++

Deploying the repo helm-chart

Installing the helm chart from this repo instead of stable/acs-engine-autoscaler:

2018-05-16 00:26:48,577 - autoscaler.cluster - DEBUG - Using kube service account
2018-05-16 00:26:48,578 - autoscaler.cluster - INFO - ++++ Running Scaling Loop ++++++
2018-05-16 00:26:48,645 - autoscaler.cluster - INFO - Pods to schedule: 0
2018-05-16 00:26:48,645 - autoscaler.cluster - INFO - ++++ Scaling Up Begins ++++++
2018-05-16 00:26:48,645 - autoscaler.cluster - INFO - Nodes: 4
2018-05-16 00:26:48,645 - autoscaler.cluster - INFO - To schedule: 0
2018-05-16 00:26:48,645 - autoscaler.cluster - INFO - Pending pods: 0
2018-05-16 00:26:48,645 - autoscaler.cluster - INFO - ++++ Scaling Up Ends ++++++
2018-05-16 00:26:48,645 - autoscaler.cluster - INFO - ++++ Maintenance Begins ++++++
2018-05-16 00:26:48,646 - autoscaler.engine_scaler - INFO - ++++ Maintaining Nodes ++++++
2018-05-16 00:26:48,646 - autoscaler.engine_scaler - INFO - node: k8s-agentpool1-41030516-0                                                   state: spare-agent
2018-05-16 00:26:48,647 - autoscaler.engine_scaler - INFO - node: k8s-agentpool2-41030516-0                                                   state: busy
2018-05-16 00:26:48,647 - autoscaler.engine_scaler - INFO - node: k8s-agentpool2-41030516-1                                                   state: busy
2018-05-16 00:26:48,647 - autoscaler.engine_scaler - INFO - node: k8s-agentpool2-41030516-2                                                   state: busy
2018-05-16 00:26:48,647 - autoscaler.cluster - INFO - ++++ Maintenance Ends ++++++ 

I guess it's not unexpected that the stable helm-chart isn't working with the latest image, but it's strange that it's not working with tag: 2.1.1.