open-telemetry / opentelemetry-operator

Kubernetes Operator for OpenTelemetry Collector
Apache License 2.0

Target allocator does not show jobs #2267

Closed · 0hag1 closed this issue 11 months ago

0hag1 commented 11 months ago

Component(s)

target allocator

What happened?

Description

When I access the target allocator at localhost:8888/jobs (localhost:8888 is port-mapped to the target allocator), the only job listed is otel-collector.

The /scrape_configs endpoint does show the expected scrape configs.

(Two screenshots of the /jobs and /scrape_configs output were attached to the original issue.)
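
For reference, the two endpoints can be queried roughly like this (a minimal sketch; it assumes the operator-created Service is named otel-targetallocator and serves on port 80, as the collector config below implies — adjust the namespace, Service name, and ports to your setup):

# Forward a local port to the target allocator Service
kubectl -n opentelemetry-collector port-forward svc/otel-targetallocator 8888:80

# In another terminal: list the jobs the target allocator exposes ...
curl http://localhost:8888/jobs

# ... and the scrape configs it hands out to collectors
curl http://localhost:8888/scrape_configs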

Steps to Reproduce

OTel Collector YAML

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  labels:
    app.kubernetes.io/managed-by: opentelemetry-operator
    argocd.argoproj.io/instance: opentelemetry-collector
  name: otel
  namespace: opentelemetry-collector
spec:
  autoscaler:
    behavior:
      scaleDown:
        stabilizationWindowSeconds: 15
      scaleUp:
        stabilizationWindowSeconds: 1
    maxReplicas: 2
    minReplicas: 1
    targetCPUUtilization: 60
  config: |
    exporters:
      otlp:
        auth:
          authenticator: basicauth/tempo
        endpoint: ${env:TEMPO_ENDPOINT}
      prometheusremotewrite:
        auth:
          authenticator: basicauth/prometheus_remote_write
        endpoint: ${env:PROMETHEUS_REMOTE_WRITE_ENDPOINT}
    extensions:
      basicauth/prometheus_remote_write:
        client_auth:
          password: ${env:PROMETHEUS_REMOTE_WRITE_PASSWORD}
          username: ${env:PROMETHEUS_REMOTE_WRITE_USERNAME}
      basicauth/tempo:
        client_auth:
          password: ${env:TEMPO_PASSWORD}
          username: ${env:TEMPO_USERNAME}
      health_check: null
    processors:
      batch:
        send_batch_size: 10000
        timeout: 10s
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 15
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            cors:
              allowed_headers:
              - '*'
              allowed_origins:
              - '*'
            endpoint: 0.0.0.0:4318
      prometheus:
        config:
          scrape_configs:
          - job_name: otel-collector
            scrape_interval: 10s
            static_configs:
            - targets:
              - 0.0.0.0:8888
        target_allocator:
          collector_id: ${POD_NAME}
          endpoint: http://otel-targetallocator
          interval: 10s
      zipkin:
        endpoint: 0.0.0.0:9411
    service:
      extensions:
      - health_check
      - basicauth/tempo
      - basicauth/prometheus_remote_write
      pipelines:
        metrics:
          exporters:
          - prometheusremotewrite
          receivers:
          - prometheus
        traces:
          exporters:
          - otlp
          receivers:
          - otlp
          - zipkin
      telemetry:
        logs:
          level: debug
  env:
    - name: TEMPO_ENDPOINT
      valueFrom:
        secretKeyRef:
          key: ENDPOINT
          name: tempo
    - name: TEMPO_USERNAME
      valueFrom:
        secretKeyRef:
          key: USER_ID
          name: tempo
    - name: TEMPO_PASSWORD
      valueFrom:
        secretKeyRef:
          key: PASSWORD
          name: tempo
    - name: PROMETHEUS_REMOTE_WRITE_ENDPOINT
      valueFrom:
        secretKeyRef:
          key: ENDPOINT
          name: prometheus-remote-write
    - name: PROMETHEUS_REMOTE_WRITE_USERNAME
      valueFrom:
        secretKeyRef:
          key: USER_ID
          name: prometheus-remote-write
    - name: PROMETHEUS_REMOTE_WRITE_PASSWORD
      valueFrom:
        secretKeyRef:
          key: PASSWORD
          name: prometheus-remote-write
  ingress:
    route: {}
  maxReplicas: 2
  minReplicas: 1
  mode: statefulset
  observability:
    metrics: {}
  podDisruptionBudget:
    minAvailable: 1
  replicas: 1
  resources:
    limits:
      cpu: '1'
      memory: 1Gi
    requests:
      cpu: 100m
      memory: 128Mi
  serviceAccount: otel-collector
  targetAllocator:
    enabled: true
    prometheusCR:
      enabled: true
      scrapeInterval: 30s
    replicas: 1
    resources: {}
    serviceAccount: otel-collector
  upgradeStrategy: automatic
status:
  image: 'otel/opentelemetry-collector-contrib:0.87.0'
  scale:
    replicas: 1
    selector: >-
      app.kubernetes.io/component=opentelemetry-collector,app.kubernetes.io/instance=opentelemetry-collector.otel,app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/name=otel-collector,app.kubernetes.io/part-of=opentelemetry,app.kubernetes.io/version=latest,argocd.argoproj.io/instance=opentelemetry-collector
    statusReplicas: 1/1
  version: 0.87.0

Expected Result

Jobs discovered through the enabled ServiceMonitor and PodMonitor resources should be displayed under /jobs.
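
Concretely, a hypothetical check (the job names are taken from the "Scrape job added" entries in the collector log below, not from an actual /jobs response):

# /jobs would be expected to list the discovered jobs, for example:
#   serviceMonitor/knative-serving/controller/0
#   serviceMonitor/kube-state-metrics/kube-state-metrics/1
#   podMonitor/knative-serving/webhook/0
#   otel-collector
curl http://localhost:8888/jobs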

Actual Result

Only the otel-collector job is shown at localhost:8888/jobs.

Kubernetes Version

1.27

Operator version

v0.87.0

Collector version

v0.87.0

Environment information


Log output

target allocator logs

{"level":"info","ts":"2023-10-24T09:16:39Z","msg":"Starting the Target Allocator"}{"level":"info","ts":"2023-10-24T09:16:39Z","logger":"allocator","msg":"Unrecognized filter strategy; filtering disabled"}{"level":"info","ts":"2023-10-24T09:16:39Z","msg":"Waiting for caches to sync for servicemonitors\n"}{"level":"info","ts":"2023-10-24T09:16:39Z","logger":"allocator","msg":"Starting server..."}{"level":"info","ts":"2023-10-24T09:16:39Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}{"level":"info","ts":"2023-10-24T09:16:39Z","msg":"Caches are synced for servicemonitors\n"}{"level":"info","ts":"2023-10-24T09:16:39Z","msg":"Waiting for caches to sync for podmonitors\n"}{"level":"info","ts":"2023-10-24T09:16:39Z","msg":"Caches are synced for podmonitors\n"}{"level":"info","ts":"2023-10-24T09:31:39Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}

otel collector logs

2023-10-24T09:17:00.378Z    info    service@v0.87.0/telemetry.go:84 Setting up own telemetry...
2023-10-24T09:17:00.378Z    info    service@v0.87.0/telemetry.go:201    Serving Prometheus metrics  {"address": ":8888", "level": "Basic"}
2023-10-24T09:17:00.379Z    debug   extension@v0.87.0/extension.go:154  Beta component. May change in the future.   {"kind": "extension", "name": "health_check"}
2023-10-24T09:17:00.379Z    debug   extension@v0.87.0/extension.go:154  Beta component. May change in the future.   {"kind": "extension", "name": "basicauth/tempo"}
2023-10-24T09:17:00.379Z    debug   extension@v0.87.0/extension.go:154  Beta component. May change in the future.   {"kind": "extension", "name": "basicauth/prometheus_remote_write"}
2023-10-24T09:17:00.379Z    debug   exporter@v0.87.0/exporter.go:273    Stable component.   {"kind": "exporter", "data_type": "traces", "name": "otlp"}
2023-10-24T09:17:00.379Z    debug   exporter@v0.87.0/exporter.go:273    Beta component. May change in the future.   {"kind": "exporter", "data_type": "metrics", "name": "prometheusremotewrite"}
2023-10-24T09:17:00.379Z    debug   receiver@v0.87.0/receiver.go:294    Beta component. May change in the future.   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:17:00.379Z    debug   receiver@v0.87.0/receiver.go:294    Stable component.   {"kind": "receiver", "name": "otlp", "data_type": "traces"}
2023-10-24T09:17:00.379Z    debug   receiver@v0.87.0/receiver.go:294    Beta component. May change in the future.   {"kind": "receiver", "name": "zipkin", "data_type": "traces"}
2023-10-24T09:17:00.380Z    info    service@v0.87.0/service.go:143  Starting otelcol-contrib... {"Version": "0.87.0", "NumCPU": 2}
2023-10-24T09:17:00.380Z    info    extensions/extensions.go:33 Starting extensions...
2023-10-24T09:17:00.380Z    info    extensions/extensions.go:36 Extension is starting...    {"kind": "extension", "name": "health_check"}
2023-10-24T09:17:00.380Z    info    healthcheckextension@v0.87.0/healthcheckextension.go:35 Starting health_check extension {"kind": "extension", "name": "health_check", "config": {"Endpoint":"0.0.0.0:13133","TLSSetting":null,"CORS":null,"Auth":null,"MaxRequestBodySize":0,"IncludeMetadata":false,"ResponseHeaders":null,"Path":"/","ResponseBody":null,"CheckCollectorPipeline":{"Enabled":false,"Interval":"5m","ExporterFailureThreshold":5}}}
2023-10-24T09:17:00.380Z    warn    internal@v0.87.0/warning.go:40  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks    {"kind": "extension", "name": "health_check", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2023-10-24T09:17:00.380Z    info    extensions/extensions.go:43 Extension started.  {"kind": "extension", "name": "health_check"}
2023-10-24T09:17:00.380Z    info    extensions/extensions.go:36 Extension is starting...    {"kind": "extension", "name": "basicauth/tempo"}
2023-10-24T09:17:00.380Z    info    extensions/extensions.go:43 Extension started.  {"kind": "extension", "name": "basicauth/tempo"}
2023-10-24T09:17:00.380Z    info    extensions/extensions.go:36 Extension is starting...    {"kind": "extension", "name": "basicauth/prometheus_remote_write"}
2023-10-24T09:17:00.380Z    info    extensions/extensions.go:43 Extension started.  {"kind": "extension", "name": "basicauth/prometheus_remote_write"}
2023-10-24T09:17:00.380Z    info    zapgrpc/zapgrpc.go:178  [core] [Channel #1] Channel created {"grpc_log": true}
2023-10-24T09:17:00.380Z    info    zapgrpc/zapgrpc.go:178  [core] [Channel #1] original dial target is: "tempo-eu-west-0.grafana.net:443"  {"grpc_log": true}
2023-10-24T09:17:00.380Z    info    zapgrpc/zapgrpc.go:178  [core] [Channel #1] parsed dial target is: {URL:{Scheme:tempo-eu-west-0.grafana.net Opaque:443 User: Host: Path: RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}    {"grpc_log": true}
2023-10-24T09:17:00.380Z    info    zapgrpc/zapgrpc.go:178  [core] [Channel #1] fallback to scheme "passthrough"    {"grpc_log": true}
2023-10-24T09:17:00.380Z    info    zapgrpc/zapgrpc.go:178  [core] [Channel #1] parsed dial target is: {URL:{Scheme:passthrough Opaque: User: Host: Path:/tempo-eu-west-0.grafana.net:443 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}   {"grpc_log": true}
2023-10-24T09:17:00.381Z    info    zapgrpc/zapgrpc.go:178  [core] [Channel #1] Channel switches to new LB policy "pick_first"  {"grpc_log": true}
2023-10-24T09:17:00.381Z    info    zapgrpc/zapgrpc.go:178  [core] [pick-first-lb 0x4002b45680] Received new config {
  "shuffleAddressList": false
}, resolver state {
  "Addresses": [
    {
      "Addr": "tempo-eu-west-0.grafana.net:443",
      "ServerName": "",
      "Attributes": null,
      "BalancerAttributes": null,
      "Metadata": null
    }
  ],
  "Endpoints": [
    {
      "Addresses": [
        {
          "Addr": "tempo-eu-west-0.grafana.net:443",
          "ServerName": "",
          "Attributes": null,
          "BalancerAttributes": null,
          "Metadata": null
        }
      ],
      "Attributes": null
    }
  ],
  "ServiceConfig": null,
  "Attributes": null
}   {"grpc_log": true}
2023-10-24T09:17:00.381Z    info    zapgrpc/zapgrpc.go:178  [core] [Channel #1 SubChannel #2] Subchannel created    {"grpc_log": true}
2023-10-24T09:17:00.381Z    info    zapgrpc/zapgrpc.go:178  [core] [Channel #1] Channel Connectivity change to CONNECTING   {"grpc_log": true}
2023-10-24T09:17:00.381Z    info    zapgrpc/zapgrpc.go:178  [core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING  {"grpc_log": true}
2023-10-24T09:17:00.381Z    info    zapgrpc/zapgrpc.go:178  [core] [Channel #1 SubChannel #2] Subchannel picks a new address "tempo-eu-west-0.grafana.net:443" to connect   {"grpc_log": true}
2023-10-24T09:17:00.381Z    info    prometheusreceiver@v0.87.0/metrics_receiver.go:105  Starting target allocator discovery {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:17:00.381Z    info    zapgrpc/zapgrpc.go:178  [core] [pick-first-lb 0x4002b45680] Received SubConn state update: 0x4002b45830, {ConnectivityState:CONNECTING ConnectionError:<nil>}   {"grpc_log": true}
2023-10-24T09:17:00.381Z    info    prometheusreceiver@v0.87.0/metrics_receiver.go:239  Starting discovery manager  {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:17:00.382Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:17:00.411Z    info    prometheusreceiver@v0.87.0/metrics_receiver.go:230  Scrape job added    {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/external-dns/external-dns/0"}
2023-10-24T09:17:00.411Z    info    prometheusreceiver@v0.87.0/metrics_receiver.go:230  Scrape job added    {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/knative-serving/controller/0"}
2023-10-24T09:17:00.411Z    info    prometheusreceiver@v0.87.0/metrics_receiver.go:230  Scrape job added    {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/knative-serving/webhook/0"}
2023-10-24T09:17:00.411Z    info    prometheusreceiver@v0.87.0/metrics_receiver.go:230  Scrape job added    {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "otel-collector"}
2023-10-24T09:17:00.411Z    info    prometheusreceiver@v0.87.0/metrics_receiver.go:230  Scrape job added    {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "podMonitor/knative-serving/webhook/0"}
2023-10-24T09:17:00.411Z    info    prometheusreceiver@v0.87.0/metrics_receiver.go:230  Scrape job added    {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/argo-workflows/argo-workflows-workflow-controller/0"}
2023-10-24T09:17:00.411Z    info    prometheusreceiver@v0.87.0/metrics_receiver.go:230  Scrape job added    {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/contour-external/contour-external-envoy/0"}
2023-10-24T09:17:00.411Z    info    prometheusreceiver@v0.87.0/metrics_receiver.go:230  Scrape job added    {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/contour-gateway/contour-gateway-envoy/0"}
2023-10-24T09:17:00.411Z    info    prometheusreceiver@v0.87.0/metrics_receiver.go:230  Scrape job added    {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/kube-state-metrics/kube-state-metrics/1"}
2023-10-24T09:17:00.412Z    debug   discovery/manager.go:289    Starting provider   {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "provider": "http/0", "subs": "map[serviceMonitor/kube-system/aws-load-balancer-controller/0:{}]"}
2023-10-24T09:17:00.413Z    debug   discovery/manager.go:289    Starting provider   {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "provider": "http/1", "subs": "map[serviceMonitor/contour-gateway/contour-gateway-contour/0:{}]"}
2023-10-24T09:17:00.413Z    debug   discovery/manager.go:289    Starting provider   {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "provider": "http/2", "subs": "map[serviceMonitor/karpenter/karpenter/0:{}]"}
2023-10-24T09:17:00.413Z    debug   discovery/manager.go:289    Starting provider   {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "provider": "http/3", "subs": "map[serviceMonitor/contour-internal/contour-internal-envoy/0:{}]"}
2023-10-24T09:17:00.413Z    debug   discovery/manager.go:289    Starting provider   {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "provider": "http/4", "subs": "map[serviceMonitor/opentelemetry-operator/opentelemetry-operator/0:{}]"}
2023-10-24T09:17:00.413Z    debug   discovery/manager.go:289    Starting provider   {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "provider": "http/5", "subs": "map[podMonitor/knative-serving/webhook/0:{}]"}
2023-10-24T09:17:00.414Z    debug   discovery/manager.go:289    Starting provider   {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "provider": "http/6", "subs": "map[serviceMonitor/kube-system/cilium-agent/0:{}]"}
2023-10-24T09:17:00.414Z    debug   discovery/manager.go:289    Starting provider   {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "provider": "http/7", "subs": "map[podMonitor/knative-serving/controller/0:{}]"}
2023-10-24T09:17:00.414Z    debug   discovery/manager.go:289    Starting provider   {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "provider": "http/8", "subs": "map[serviceMonitor/kube-system/metrics-server/0:{}]"}
2023-10-24T09:17:00.414Z    debug   discovery/manager.go:289    Starting provider   {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "provider": "http/9", "subs": "map[serviceMonitor/kube-system/coredns-coredns/0:{}]"}
2023-10-24T09:17:00.414Z    warn    internal@v0.87.0/warning.go:40  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks    {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2023-10-24T09:17:00.414Z    info    zapgrpc/zapgrpc.go:178  [core] [Server #3] Server created   {"grpc_log": true}
2023-10-24T09:17:00.414Z    info    otlpreceiver@v0.87.0/otlp.go:83 Starting GRPC server    {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
2023-10-24T09:17:00.415Z    warn    internal@v0.87.0/warning.go:40  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks    {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2023-10-24T09:17:00.415Z    info    otlpreceiver@v0.87.0/otlp.go:101    Starting HTTP server    {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
2023-10-24T09:17:00.415Z    warn    internal@v0.87.0/warning.go:40  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks    {"kind": "receiver", "name": "zipkin", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2023-10-24T09:17:00.415Z    info    healthcheck/handler.go:132  Health Check state change   {"kind": "extension", "name": "health_check", "status": "ready"}
2023-10-24T09:17:00.415Z    info    service@v0.87.0/service.go:169  Everything is ready. Begin running and processing data.
2023-10-24T09:17:00.420Z    info    prometheusreceiver@v0.87.0/metrics_receiver.go:281  Starting scrape manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:17:00.420Z    info    zapgrpc/zapgrpc.go:178  [core] [Server #3 ListenSocket #4] ListenSocket created {"grpc_log": true}
2023-10-24T09:17:00.436Z    info    zapgrpc/zapgrpc.go:178  [core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY   {"grpc_log": true}
2023-10-24T09:17:00.436Z    info    zapgrpc/zapgrpc.go:178  [core] [pick-first-lb 0x4002b45680] Received SubConn state update: 0x4002b45830, {ConnectivityState:READY ConnectionError:<nil>}    {"grpc_log": true}
2023-10-24T09:17:00.436Z    info    zapgrpc/zapgrpc.go:178  [core] [Channel #1] Channel Connectivity change to READY    {"grpc_log": true}
2023-10-24T09:17:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:18:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:18:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:19:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:19:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:20:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:20:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:21:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:21:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:22:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:22:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:23:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:23:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:24:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:24:30.420Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:25:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:25:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:26:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:26:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:27:00.420Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:27:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:28:00.420Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:28:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:29:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:29:30.420Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:30:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:30:30.420Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:31:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:31:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:32:00.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-10-24T09:32:30.421Z    debug   prometheusreceiver@v0.87.0/metrics_receiver.go:135  Syncing target allocator jobs   {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}

otel operator logs

{"level":"info","ts":"2023-10-24T09:14:02Z","msg":"Starting the OpenTelemetry Operator","opentelemetry-operator":"0.87.0","opentelemetry-collector":"otel/opentelemetry-collector-contrib:0.87.0","opentelemetry-targetallocator":"ghcr.io/open-telemetry/opentelemetry-operator/target-allocator:0.87.0","operator-opamp-bridge":"ghcr.io/open-telemetry/opentelemetry-operator/operator-opamp-bridge:0.87.0","auto-instrumentation-java":"ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:1.30.0","auto-instrumentation-nodejs":"ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:0.41.1","auto-instrumentation-python":"ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:0.41b0","auto-instrumentation-dotnet":"ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:1.0.2","auto-instrumentation-go":"ghcr.io/open-telemetry/opentelemetry-go-instrumentation/autoinstrumentation-go:v0.7.0-alpha","auto-instrumentation-apache-httpd":"ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.3","auto-instrumentation-nginx":"ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.3","feature-gates":"operator.autoinstrumentation.apache-httpd,operator.autoinstrumentation.dotnet,-operator.autoinstrumentation.go,operator.autoinstrumentation.java,-operator.autoinstrumentation.multi-instrumentation,-operator.autoinstrumentation.nginx,operator.autoinstrumentation.nodejs,operator.autoinstrumentation.python,operator.collector.rewritetargetallocator,-operator.observability.prometheus","build-date":"2023-10-18T15:22:14Z","go-version":"go1.21.3","go-arch":"arm64","go-os":"linux","labels-filter":[]}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"setup","msg":"the env var WATCH_NAMESPACE isn't set, watching all namespaces"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.builder","msg":"Registering a mutating webhook","GVK":"opentelemetry.io/v1alpha1, Kind=OpenTelemetryCollector","path":"/mutate-opentelemetry-io-v1alpha1-opentelemetrycollector"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/mutate-opentelemetry-io-v1alpha1-opentelemetrycollector"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.builder","msg":"Registering a validating webhook","GVK":"opentelemetry.io/v1alpha1, Kind=OpenTelemetryCollector","path":"/validate-opentelemetry-io-v1alpha1-opentelemetrycollector"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-opentelemetry-io-v1alpha1-opentelemetrycollector"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.builder","msg":"Registering a mutating webhook","GVK":"opentelemetry.io/v1alpha1, Kind=Instrumentation","path":"/mutate-opentelemetry-io-v1alpha1-instrumentation"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/mutate-opentelemetry-io-v1alpha1-instrumentation"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.builder","msg":"Registering a validating webhook","GVK":"opentelemetry.io/v1alpha1, Kind=Instrumentation","path":"/validate-opentelemetry-io-v1alpha1-instrumentation"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-opentelemetry-io-v1alpha1-instrumentation"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/mutate-v1-pod"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"setup","msg":"starting manager"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.metrics","msg":"Starting metrics server"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.metrics","msg":"Serving metrics server","bindAddress":"0.0.0.0:8080","secure":false}
{"level":"info","ts":"2023-10-24T09:14:02Z","msg":"starting server","kind":"health probe","addr":"[::]:8081"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.webhook","msg":"Starting webhook server"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.certwatcher","msg":"Updated current TLS certificate"}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.webhook","msg":"Serving webhook server","host":"","port":9443}
{"level":"info","ts":"2023-10-24T09:14:02Z","logger":"controller-runtime.certwatcher","msg":"Starting certificate watcher"}
I1024 09:14:02.397371       1 leaderelection.go:250] attempting to acquire leader lease opentelemetry-operator/9f7554c3.opentelemetry.io...
2023/10/24 09:14:19 http: TLS handshake error from 10.0.43.10:43132: EOF
I1024 09:16:37.473688       1 leaderelection.go:260] successfully acquired lease opentelemetry-operator/9f7554c3.opentelemetry.io
{"level":"info","ts":"2023-10-24T09:16:37Z","logger":"instrumentation-upgrade","msg":"looking for managed Instrumentation instances to upgrade"}
{"level":"info","ts":"2023-10-24T09:16:37Z","logger":"collector-upgrade","msg":"looking for managed instances to upgrade"}
{"level":"info","ts":"2023-10-24T09:16:37Z","msg":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1alpha1.OpenTelemetryCollector"}
{"level":"info","ts":"2023-10-24T09:16:37Z","msg":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.ConfigMap"}
{"level":"info","ts":"2023-10-24T09:16:37Z","msg":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.ServiceAccount"}
{"level":"info","ts":"2023-10-24T09:16:37Z","msg":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.Service"}
{"level":"info","ts":"2023-10-24T09:16:37Z","msg":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.Deployment"}
{"level":"info","ts":"2023-10-24T09:16:37Z","msg":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.DaemonSet"}
{"level":"info","ts":"2023-10-24T09:16:37Z","msg":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.StatefulSet"}
{"level":"info","ts":"2023-10-24T09:16:37Z","msg":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v2.HorizontalPodAutoscaler"}
{"level":"info","ts":"2023-10-24T09:16:37Z","msg":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.PodDisruptionBudget"}
{"level":"info","ts":"2023-10-24T09:16:37Z","msg":"Starting Controller","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector"}
{"level":"info","ts":"2023-10-24T09:16:37Z","logger":"instrumentation-upgrade","msg":"no instances to upgrade"}
{"level":"info","ts":"2023-10-24T09:16:37Z","msg":"Starting workers","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","worker count":1}
{"level":"info","ts":"2023-10-24T09:16:37Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:38Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:38Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:38Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:38Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:38Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:38Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel2","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:38Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel2","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:39Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:39Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:39Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:39Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:39Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:39Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:40Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:40Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:40Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:44Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:45Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:45Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:45Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:45Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:45Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:49Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:49Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:49Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:49Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:50Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:50Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:53Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:53Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:53Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:53Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:53Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel2","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:53Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel2","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:54Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:54Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:54Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:54Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:54Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:54Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:59Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:59Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:59Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:16:59Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:16:59Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:16:59Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:17:00Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"error","ts":"2023-10-24T09:17:00Z","logger":"controllers.OpenTelemetryCollector","msg":"failed to configure desired","opentelemetrycollector":{"name":"otel","namespace":"opentelemetry-collector"},"object_name":"otel-collector","object_kind":"&TypeMeta{Kind:,APIVersion:,}","error":"Operation cannot be fulfilled on statefulsets.apps \"otel-collector\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/open-telemetry/opentelemetry-operator/controllers.reconcileDesiredObjects\n\t/workspace/controllers/common.go:89\ngithub.com/open-telemetry/opentelemetry-operator/controllers.(*OpenTelemetryCollectorReconciler).Reconcile\n\t/workspace/controllers/opentelemetrycollector_controller.go:192\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.2/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.2/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.2/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.2/pkg/internal/controller/controller.go:227"}
{"level":"error","ts":"2023-10-24T09:17:00Z","msg":"Reconciler error","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","OpenTelemetryCollector":{"name":"otel","namespace":"opentelemetry-collector"},"namespace":"opentelemetry-collector","name":"otel","reconcileID":"c0a1bbb4-8d58-462a-abe0-91d6b8ae4319","error":"failed to create objects for otel: Operation cannot be fulfilled on statefulsets.apps \"otel-collector\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.2/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.2/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.2/pkg/internal/controller/controller.go:227"}
{"level":"info","ts":"2023-10-24T09:17:00Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:17:01Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:17:01Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:17:01Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:17:01Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:17:01Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:17:01Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:17:01Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:17:01Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:17:04Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:17:05Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:17:05Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:17:05Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:17:05Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}
{"level":"info","ts":"2023-10-24T09:17:05Z","logger":"controllers.OpenTelemetryCollector","msg":"skipping upgrade for OpenTelemetry Collector instance","name":"otel","namespace":"opentelemetry-collector"}
{"level":"info","ts":"2023-10-24T09:17:09Z","logger":"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","error":"missing port in address"}
{"level":"info","ts":"2023-10-24T09:17:09Z","logger":"controllers.OpenTelemetryCollector","msg":"no upgrade routines are needed for the OpenTelemetry instance","name":"otel","namespace":"opentelemetry-collector","version":"0.87.0","latest":"0.61.0"}


Additional context

No response
jaronoff97 commented 11 months ago

Hmm... given you are seeing the scrape configs populate, I think this is indeed a bug. @swiatekm-sumo would you have time to take a look at this?

jaronoff97 commented 11 months ago

actually, this is the same issue as https://github.com/open-telemetry/opentelemetry-operator/issues/2262. I'm going to close this in favor of that one.