elastic / cloud-on-k8s

Elastic Cloud on Kubernetes

ECK operator gets 401 Unauthorized when trying to setup Fleet Server #6144

Open legoguy1000 opened 2 years ago

legoguy1000 commented 2 years ago

Bug Report

What did you do? See https://discuss.elastic.co/t/elastic-agent-fleet-setup-unauthorized/317406

We are deploying an ECK cluster on a bare-metal k8s cluster. These clusters are short-lived and rebuilt many times, so it is not a permanent environment. The issue is very inconsistent, but often when we deploy, Elasticsearch and Kibana come up via ECK with no issues, yet when I try to deploy Fleet Server, the pod for the Fleet Server agent is never created. When the operator calls the Kibana API to set up Fleet, it returns the errors below: first a bunch of 401s, then eventually timeouts.

I have found that most of the time, if I delete the ECK operator pod, the recreated pod eventually succeeds and the Fleet Server pod is created. I don't have any issues with the regular agents deployed via ECK once Fleet Server is up.
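A minimal sketch of that workaround, assuming a default ECK install where the operator runs as the `elastic-operator` StatefulSet in the `elastic-system` namespace (adjust names/namespaces if your install differs):

```shell
# Find the operator pod (names below assume a default ECK install).
kubectl get pods -n elastic-system

# Delete it; the StatefulSet recreates it and reconciliation starts over.
kubectl delete pod elastic-operator-0 -n elastic-system

# Watch for the Fleet Server pod to eventually appear.
kubectl get pods -n default -w -l common.k8s.elastic.co/type=agent
```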

Also, this issue seems to be far more prevalent when I have ECK use a local offline Docker registry with no access to the internet, but IDK if that is just coincidence.

What did you expect to see? The Fleet Server pod is created without issues.

What did you see instead? Under which circumstances?

Environment

2.4.0 and 2.5.0 (current)

k8s BareMetal v1.23

$ kubectl version
taxilian commented 1 year ago

Did you ever figure this out? I'm seeing the same issue

ghost commented 1 year ago

I'm not sure what fixed it for me, but I:

Finally it somehow worked.

legoguy1000 commented 1 year ago

I believe it's still an issue; however, our workaround is to just delete the ECK operator pod after deploying the Agent resource. After the operator pod is recreated, Fleet Server and then the regular agents are created without issue. Once I upgrade to 2.6.x, I'll have to see if it's still something we have to do.

taxilian commented 1 year ago

Huh; I finally found I had some bad ServiceAccount definitions (wrong namespace). After I fixed those, this issue went away and the fleet server and agents all started up, but Kibana doesn't seem to know that there is a fleet server. This is probably all just 'cause of stuff I don't understand, though, so I'll keep tinkering.


legoguy1000 commented 1 year ago

I'm testing the upgrade to 2.6.1, and it seems to have resolved the issue.

taxilian commented 1 year ago

I have been on 2.6.1 the whole time; it does seem to eventually resolve, but I see it each time. I did have to roll back to an older Fleet Server version (8.5.3 instead of 8.6.1) to get it to come up, though.

legoguy1000 commented 1 year ago

I just use the RBAC configs straight from the YAML from Elastic, so I don't seem to have an issue with ServiceAccount stuff. IDK. I'm still seeing the 401s in the logs, but it doesn't seem to be preventing Fleet Server from coming up.

naemono commented 1 year ago

https://github.com/elastic/cloud-on-k8s/issues/6331 https://github.com/elastic/elastic-agent-autodiscover/issues/41

Fleet/Agent on 8.6.x is a known issue, and it is being worked on by the Agent team.

When testing Fleet with ECK, the Fleet pod will restart a couple of times on new installations while Kibana and Elasticsearch become fully healthy, but eventually this should succeed. If you continue to see issues once the above 2 issues are resolved, please feel free to re-open this issue. Thanks.

legoguy1000 commented 1 year ago

This issue was present with 8.5.1 as well and ECK 2.5, and I don't see how the issues you linked relate to this one. This issue is with the ECK operator, not with the agents themselves, as there is no agent; that's the whole issue.

naemono commented 1 year ago

@legoguy1000 Perhaps I misunderstood the issue. I'll re-open and do some testing and update when I have more information.

naemono commented 1 year ago

@legoguy1000 Can we get your full Kibana manifest/YAML so we can try and reproduce this, please?

Also, is there anything special about this certificate?

    tls:
      certificate:
        secretName: fleet-server-certificate

Also, are you bringing up ES/Kibana/Fleet all at the same time, or ES/Kibana first, then Fleet at a later date?

legoguy1000 commented 1 year ago

We use Ansible to deploy everything. First Elasticsearch is deployed, and we wait until the cluster is green. Then Kibana, then Logstash (via a regular k8s Deployment). Then we deploy Fleet Server, and once Fleet Server is green, we deploy an agent DaemonSet. Our Kibana Ansible template:

---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: {{ cluster_name }}
spec:
  version: {{ elastic_ver }}
  count: {{ kibana_nodes }}
  elasticsearchRef:
    name: {{ cluster_name }}
    serviceName: elasticsearch-coord
  http:
    tls:
      certificate:
        secretName: kibana-certificate
  secureSettings:
  - secretName: kibana-key-secret-settings
  - secretName: kibana-alert-secret-settings
  config:
    server.publicBaseUrl: https://kibana.{{ domain }}
    uiSettings:
      overrides:
        "doc_table:legacy": true
        "theme:darkMode": true
    telemetry.optIn: false
    telemetry.allowChangingOptInStatus: false
    monitoring.ui.container.elasticsearch.enabled: false
    monitoring.ui.ccs.enabled: false
    xpack.reporting.enabled: true
    elasticsearch.requestTimeout: 100000
    elasticsearch.shardTimeout: 0
    monitoring.kibana.collection.interval: 30000
    xpack.fleet.agents.elasticsearch.hosts: ["https://elasticsearch-ingest.default.svc.cluster.local:9200"]
    xpack.fleet.agents.fleet_server.hosts:
      - "https://fleet-server-agent-http.default.svc"
{% if kit_external_dns != '' %}
      - "https://{{ external_dns }}:6062"
{% endif %}
{% if kit_external_ip != '' and kit_external_dns == '' %}
      - "https://{{ external_ip }}:6062"
{% endif %}
{% if configure_for_offline %}
    xpack.fleet.registryUrl: "http://package-registry.default.svc:8080"
{% endif %}
    xpack.fleet.packages:
      - name: system
        version: latest
      - name: elastic_agent
        version: latest
      - name: fleet_server
        version: latest
      - name: pfsense
        version: latest
    xpack.fleet.agentPolicies:
      - name: Fleet Server on ECK policy
        id: eck-fleet-server
        is_default_fleet_server: true
        is_managed: true
        namespace: default
        unenroll_timeout: 3600
        monitoring_enabled: []
        #   - logs
        package_policies:
        - name: fleet_server-1
          id: fleet_server-1
          package:
            name: fleet_server
      - name: Default Agent
        id: eck-agent
        namespace: default
        monitoring_enabled: []
        #   - logs
        #   - metrics
        unenroll_timeout: 1800
        is_default: true
        package_policies: []
  podTemplate:
    spec:
      containers:
      - name: kibana
        env:
        - name: NEWSFEED_ENABLED
          value: "false"
        - name: NODE_OPTIONS
          value: "--max-old-space-size={{ (kibana_memory * 1024 / 2) | int }}"
        - name: SERVER_MAXPAYLOAD
          value: "2097152"
        resources:
          requests:
            memory: {{ kibana_memory }}Gi
            cpu: {{ kibana_cpu }}
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              common.k8s.elastic.co/type: "kibana"
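The wait-for-green gating between deployment stages can be sketched with plain kubectl; resource names are taken from the manifests in this thread (the Ansible tasks presumably do the equivalent):

```shell
# Gate on the Elasticsearch CR reporting green before deploying Kibana.
until [ "$(kubectl get elasticsearch {{ cluster_name }} -o jsonpath='{.status.health}')" = "green" ]; do
  sleep 10
done

# Same gate on the Fleet Server Agent CR before applying the agent DaemonSet.
until [ "$(kubectl get agent fleet-server -o jsonpath='{.status.health}')" = "green" ]; do
  sleep 10
done
```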

The certificate is just a plain server certificate issued by Cert Manager via a self signed internal CA

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: fleet-server
  namespace: default
spec:
  # Secret names are always required.
  secretName: fleet-server-certificate
  duration: {{ certmanager.default_cert_length }}
  renewBefore: {{ certmanager.default_cert_renewal }}
  commonName: fleet-server
  subject:
   organizations:
   - "{{ domain }}"
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - server auth
  dnsNames:
  - fleet-server
  - fleet-server.{{ domain }}
  - fleet-server.{{ fqdn }}
  - fleet-server-agent-http.default.svc
{% if external_dns != '' %}
  - {{ external_dns }}
{% endif %}
{% if external_ip != '' %}
  ipAddresses:
    - {{ external_ip }}
{% endif %}
  issuerRef:
    name: "{{ certmanager.ca_issuer }}"
    kind: Issuer
    group: cert-manager.io
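One way to sanity-check the certificate cert-manager issued (secret name from the manifest above; `-ext` requires OpenSSL 1.1.1+) is to pull the secret and inspect its subject and SANs:

```shell
# Decode the TLS cert from the secret and print subject + subjectAltName.
kubectl get secret fleet-server-certificate -n default \
  -o jsonpath='{.data.tls\.crt}' | base64 -d \
  | openssl x509 -noout -subject -ext subjectAltName
```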
naemono commented 1 year ago

@legoguy1000 So I tested this again, and here's what I saw:

  1. Laid down the ES manifest, and it became green:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: testing
spec:
  version: 8.6.0
  nodeSets:
    - name: masters
      count: 3
      config:
        node.roles: ["master", "data"]
        node.store.allow_mmap: false
      podTemplate:
        spec:
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
            fsGroup: 1000
  2. Laid down the Kibana manifest, and it became healthy (tried to mirror yours as much as possible):
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 8.6.0
  count: 1
  config:
    uiSettings:
      overrides:
        "doc_table:legacy": true
        "theme:darkMode": true
    telemetry.optIn: false
    telemetry.allowChangingOptInStatus: false
    monitoring.ui.container.elasticsearch.enabled: false
    monitoring.ui.ccs.enabled: false
    xpack.reporting.enabled: true
    elasticsearch.requestTimeout: 100000
    elasticsearch.shardTimeout: 0
    monitoring.kibana.collection.interval: 30000
    xpack.fleet.agents.elasticsearch.host: "https://testing-es-http.default.svc:9200"
    xpack.fleet.agents.fleet_server.hosts: ["https://fleet-server-agent-http.default.svc:8220"]
    xpack.fleet.packages:
      - name: system
        version: latest
      - name: elastic_agent
        version: latest
      - name: fleet_server
        version: latest
      - name: pfsense
        version: latest
    xpack.fleet.agentPolicies:
      - name: Fleet Server on ECK policy
        id: eck-fleet-server
        is_default_fleet_server: true
        is_managed: true
        namespace: default
        unenroll_timeout: 3600
        monitoring_enabled: []
        #   - logs
        package_policies:
        - name: fleet_server-1
          id: fleet_server-1
          package:
            name: fleet_server
      - name: Default Agent
        id: eck-agent
        namespace: default
        monitoring_enabled: []
        #   - logs
        #   - metrics
        unenroll_timeout: 1800
        is_default: true
        package_policies: []
  elasticsearchRef:
    name: testing
  podTemplate:
    spec:
      containers:
      - name: kibana
        env:
        - name: NEWSFEED_ENABLED
          value: "false"
        - name: SERVER_MAXPAYLOAD
          value: "2097152"
  3. Laid down the Fleet Server manifest:
    apiVersion: agent.k8s.elastic.co/v1alpha1
    kind: Agent
    metadata:
      name: fleet-server
      namespace: default
    spec:
      version: 8.6.0
      kibanaRef:
        name: kibana
      elasticsearchRefs:
      - name: testing
      mode: fleet
      fleetServerEnabled: true
      deployment:
        replicas: 1
        podTemplate:
          spec:
            serviceAccountName: fleet-server
            automountServiceAccountToken: true
            securityContext:
              runAsUser: 0
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: fleet-server
    rules:
    - apiGroups: [""]
      resources:
      - pods
      - namespaces
      - nodes
      verbs:
      - get
      - watch
      - list
    - apiGroups: ["coordination.k8s.io"]
      resources:
      - leases
      verbs:
      - get
      - create
      - update
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: fleet-server
      namespace: default
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: fleet-server
    subjects:
    - kind: ServiceAccount
      name: fleet-server
      namespace: default
    roleRef:
      kind: ClusterRole
      name: fleet-server
      apiGroup: rbac.authorization.k8s.io

Upon laying down the fleet server manifest, I see the 401 errors in the operator logs

{"log.level":"error","@timestamp":"2023-02-20T15:25:40.959Z","log.logger":"manager.eck-operator","message":"Reconciler error","service.version":"2.6.0+a35bb187","service.type":"eck","ecs.version":"1.4.0","controller":"agent-controller","object":{"name":"fleet-server","namespace":"default"},"namespace":"default","name":"fleet-server","reconcileID":"9042b248-a3a6-451f-959f-b9a8bc798937","error":"failed to request https://kibana-kb-http.default.svc:5601/api/fleet/setup, status is 401)","errorCauses":[{"error":"failed to request https://kibana-kb-http.default.svc:5601/api/fleet/setup, status is 401)"}],"error.stack_trace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:326\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:234"}
{"log.level":"debug","@timestamp":"2023-02-20T15:25:40.959Z","log.logger":"manager.eck-operator.events","message":"Reconciliation error: failed to request https://kibana-kb-http.default.svc:5601/api/fleet/setup, status is 401)","service.version":"2.6.0+a35bb187","service.type":"eck","ecs.version":"1.4.0","type":"Warning","object":{"kind":"Agent","namespace":"default","name":"fleet-server","uid":"655e4eee-d49b-48bf-b8f3-f0d4f2ef7917","apiVersion":"agent.k8s.elastic.co/v1alpha1","resourceVersion":"329734023"},"reason":"ReconciliationError"}

This is expected, as some "association" credentials are being reconciled to the ES instance, which takes some time; in my case, it eventually succeeds:

{"log.level":"debug","@timestamp":"2023-02-20T15:26:00.337Z","log.logger":"agent-controller","message":"Fleet API HTTP request","service.version":"2.6.0+a35bb187","service.type":"eck","ecs.version":"1.4.0","iteration":"19","namespace":"default","agent_name":"fleet-server","method":"POST","url":"https://kibana-kb-http.default.svc:5601/api/fleet/setup"}
{"log.level":"debug","@timestamp":"2023-02-20T15:26:01.195Z","log.logger":"agent-controller","message":"Fleet API HTTP request","service.version":"2.6.0+a35bb187","service.type":"eck","ecs.version":"1.4.0","iteration":"19","namespace":"default","agent_name":"fleet-server","method":"GET","url":"https://kibana-kb-http.default.svc:5601/api/fleet/agent_policies?perPage=20&page=1"}
{"log.level":"debug","@timestamp":"2023-02-20T15:26:01.254Z","log.logger":"agent-controller","message":"Fleet API HTTP request","service.version":"2.6.0+a35bb187","service.type":"eck","ecs.version":"1.4.0","iteration":"19","namespace":"default","agent_name":"fleet-server","method":"GET","url":"https://kibana-kb-http.default.svc:5601/api/fleet/enrollment_api_keys?perPage=20&page=1"}

NOTE that it does take a couple of minutes for the agent pod to show up in the namespace, but it eventually does show up and becomes healthy without any intervention from me.
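The behavior described here — 401s while association credentials propagate, then success on a later reconcile — amounts to retry-until-ready. A minimal Python sketch of that pattern (names, counts, and delays are illustrative, not the operator's actual code, which requeues with backoff instead of sleeping):

```python
import time

def fleet_setup_with_retry(call_setup, max_attempts=10, delay=0.01):
    """Retry the Fleet setup call until the credentials propagate,
    mirroring the operator's reconcile-and-requeue behavior."""
    for attempt in range(1, max_attempts + 1):
        status = call_setup()
        if status == 200:
            return attempt  # success after this many reconcile loops
        time.sleep(delay)
    raise RuntimeError("Fleet setup never succeeded")

# Simulated Kibana: returns 401 until the association credentials are ready.
responses = iter([401, 401, 401, 200])
attempts = fleet_setup_with_retry(lambda: next(responses))
print(attempts)  # 4
```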

And I see the agent green

❯ kc get agent -n default
NAME           HEALTH   AVAILABLE   EXPECTED   VERSION   AGE
fleet-server   green    1           1          8.6.0     17m

And the pod running

❯ kc get pod -n default -l common.k8s.elastic.co/type=agent
NAME                                  READY   STATUS    RESTARTS   AGE
fleet-server-agent-75c649c684-x9hfh   1/1     Running   0          17m

Now I am curious about the actual values in this block when things fail for you, as you're behind a load balancer:

    xpack.fleet.agents.fleet_server.hosts:
      - "https://fleet-server-agent-http.default.svc"
{% if kit_external_dns != '' %}
      - "https://{{ external_dns }}:6062"
{% endif %}
{% if kit_external_ip != '' and kit_external_dns == '' %}
      - "https://{{ external_ip }}:6062"
{% endif %}

Could you possibly run this eck-diagnostics tool when things are in this state so we can get a full view on the state of things?

legoguy1000 commented 1 year ago

So that may be difficult, as once I upgraded to ECK 2.6.1 I haven't seen the issue anymore. I'm able to deploy Fleet Server, and it comes up within a minute or so without any action on my part. The template values are just internal and external IPs for the various agents inside and outside the k8s cluster.

pochingliu131 commented 1 year ago

I got the same error, even after I upgraded my ECK operator version from 2.5.0 to 2.6.1!

pochingliu131 commented 1 year ago

Finally I found that it was a security setting in my Kibana config; if I remove that setting, this error no longer occurs.

legoguy1000 commented 1 year ago

> I got the same error, even I upgrade my eck operator version from 2.5.0 to 2.6.1 !
>
> finally I found that my kibana config security setting, if I remove this setting then this error no longer occurs.

What setting?

naemono commented 1 year ago

Ran my above manifests with 2.5.0 of the ECK operator and had the exact same results: eventually everything worked and became healthy with no interaction from me. Will eventually close this if no more configuration details come to light from @totoroliu0131.

gbschenkel commented 1 year ago

I am still seeing this issue with ECK operator 2.6.2 and ELK 8.6.2. I am hosting on OpenShift 4.11. I saw ECK was bumped to 2.7.0, but it is not available yet on our OpenShift instance; maybe it is not published as certified yet.

naemono commented 1 year ago

@gbschenkel 2.7.0 is now available. There was an issue with the certified release on the Red Hat side of things. If you have a similar issue to this, please give full details as to your configuration, including full reproducible manifests, so we can replicate the problem.

Thank you

Esakki1211 commented 1 year ago

Hi all, I have a similar 401 issue. I have deployed Elasticsearch and Kibana (8.9.1) in my K8s cluster, and after that I'm trying to install the custom resource operator, following this doc: https://betterprogramming.pub/managing-elasticsearch-resources-in-kubernetes-39b697908f4e

helm install eck-cr eck-custom-resources/eck-custom-resources-operator - this created the operator pod, and it's healthy.

But when I create/apply the index-template YAML, I get a 401 unauthorized error.

I changed the Elasticsearch URL details in the values.yaml file and updated the secrets as well. I have cross-checked that the secret I used is correct, though it's not working. Any suggestions?

QuinnBast commented 3 months ago

I found the solution (for me).

I was copying the example manifests here but all of them use:

namespace: default

I just updated the manifests to use the namespace I actually deployed to, and everything started working wonderfully on ECK operator version 2.14.0.
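In other words, every `namespace: default` in the example manifests, plus any service URL embedding `.default.svc`, has to be changed together. A hedged sketch, assuming a hypothetical `elastic-stack` namespace:

```
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
  namespace: elastic-stack   # was: default
spec:
  kibanaRef:
    name: kibana             # resolved in the same namespace unless one is given
```

The ServiceAccount, the ClusterRoleBinding subject's `namespace`, and the `xpack.fleet.agents.fleet_server.hosts` URL (`https://fleet-server-agent-http.<namespace>.svc`) all need to agree on that namespace, or the operator's Fleet setup calls can fail.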