open-telemetry / opentelemetry-collector-contrib

Contrib repository for the OpenTelemetry Collector
https://opentelemetry.io
Apache License 2.0

[receiver/awscontainerinsight] Amazon Linux 2023 container instance does not have the expected cgroup path, and as a result, instance_cpu_reserved_capacity and instance_memory_reserved_capacity values are 0 #33697

Closed 0nihajim closed 1 week ago

0nihajim commented 1 week ago

Component(s)

receiver/awscontainerinsight

What happened?

Description

When the awscontainerinsightreceiver runs on an AL2023 ECS container instance, the following warnings are logged repeatedly.

2024-06-21T06:02:46.395Z warn ecsInfo/cgroup.go:121 Failed to get cpu cgroup path for task: {"kind": "receiver", "name": "awscontainerinsightreceiver", "data_type": "metrics", "error": "CGroup Path \"/cgroup/cpu/ecs/AL2023/dc454d1e84c0478fa8ed683523fe7e55\" does not exist"}
2024-06-21T06:02:46.395Z warn ecsInfo/cgroup.go:142 Failed to get memory cgroup path for task: %v {"kind": "receiver", "name": "awscontainerinsightreceiver", "data_type": "metrics", "error": "CGroup Path \"/cgroup/memory/ecs/AL2023/dc454d1e84c0478fa8ed683523fe7e55\" does not exist"}
2024-06-21T06:02:46.300Z warn cadvisor/cadvisor_linux.go:211 Can't get mem or cpu reserved! {"kind": "receiver", "name": "awscontainerinsightreceiver", "data_type": "metrics"}

Steps to Reproduce

It can be reproduced by launching a container instance from the AL2023 ECS-optimized AMI and deploying the ADOT Collector as a daemon on that instance using the Quick Setup in the following document.

https://docs.aws.amazon.com/ja_jp/AmazonCloudWatch/latest/monitoring/deploy-container-insights-ECS-OTEL.html

Expected Result

The awscontainerinsightreceiver periodically reads the CPU and memory reserved by tasks from the cgroup paths /sys/fs/cgroup/memory/ecs/<cluster-name>/<task-id> and /sys/fs/cgroup/cpu/ecs/<cluster-name>/<task-id>. [1]

The awscontainerinsightreceiver then divides these values by the instance's CPU and memory limits (cpuLimits, memLimits) obtained from cAdvisor. [2]

The result of this calculation is the percentage of instance CPU and memory reserved by tasks on the container instance (instance_cpu_reserved_capacity, instance_memory_reserved_capacity). [3]
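
For example (illustrative numbers, not from this report): on a 2-vCPU instance, cAdvisor reports cpuLimits = 2000 millicores. If running tasks reserve a total of cpuReserved = 512 CPU shares, then instance_cpu_reserved_capacity = 512 / (2000 * 1.024) * 100 = 512 / 2048 * 100 = 25%.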

Actual Result

The expected cgroup path does not exist on AL2023, so the cpuReserved and memReserved values cannot be retrieved and default to 0. As a result, the instance_memory_reserved_capacity and instance_cpu_reserved_capacity values sent to CloudWatch are always reported as 0.

Root Cause

AL2023 has switched to cgroup v2. [4] The directory structure under /sys/fs/cgroup is completely different from the cgroup v1 layout, so code changes are required.
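
For context, each v1 interface file read by the receiver (see [1] below) has a cgroup v2 counterpart: cpu.cfs_quota_us and cpu.cfs_period_us are merged into a single cpu.max file, cpu.shares is replaced by cpu.weight (range 1-10000, default 100, a different scale), and memory.limit_in_bytes is replaced by memory.max, which holds the literal string "max" instead of the v1 kernel magic number when no limit is set. The sketch below is only an illustration of reading those v2 files, not the receiver's code; the task path layout under the unified hierarchy on an AL2023 ECS instance is an assumption that would need to be verified.

package main

import (
    "fmt"
    "math"
    "os"
    "path/filepath"
    "strconv"
    "strings"
)

// readCPUReservedV2 mirrors getCPUReservedInTask, but reads cgroup v2 files.
func readCPUReservedV2(taskPath string) (int64, error) {
    // cpu.max replaces cpu.cfs_quota_us + cpu.cfs_period_us; it holds
    // "<quota> <period>", where quota is the literal "max" when unlimited.
    if data, err := os.ReadFile(filepath.Join(taskPath, "cpu.max")); err == nil {
        fields := strings.Fields(strings.TrimSpace(string(data)))
        if len(fields) == 2 && fields[0] != "max" {
            quota, qErr := strconv.ParseInt(fields[0], 10, 64)
            period, pErr := strconv.ParseInt(fields[1], 10, 64)
            if qErr == nil && pErr == nil && period > 0 {
                // keep the v1 unit (1024 shares per CPU) so the downstream math is unchanged
                return int64(math.Ceil(float64(1024*quota) / float64(period))), nil
            }
        }
    }
    // cpu.weight (1..10000, default 100) replaces cpu.shares (2..262144,
    // default 1024). Inverting the shares->weight mapping used by Kubernetes
    // (weight = 1 + (shares-2)*9999/262142) yields a shares-equivalent value;
    // the conversion is lossy but keeps the reserved-capacity math comparable.
    data, err := os.ReadFile(filepath.Join(taskPath, "cpu.weight"))
    if err != nil {
        return 0, err
    }
    weight, err := strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
    if err != nil {
        return 0, err
    }
    return 2 + (weight-1)*262142/9999, nil
}

// readMemReservedV2 mirrors the task-level branch of getMEMReservedInTask.
func readMemReservedV2(taskPath string) (int64, error) {
    // memory.max replaces memory.limit_in_bytes; the string "max" replaces
    // the v1 kernel magic number that means "not set".
    data, err := os.ReadFile(filepath.Join(taskPath, "memory.max"))
    if err != nil {
        return 0, err
    }
    s := strings.TrimSpace(string(data))
    if s == "max" {
        return 0, nil // no task-level limit configured
    }
    return strconv.ParseInt(s, 10, 64)
}

func main() {
    // hypothetical task path; the real per-task location on AL2023 must be discovered
    cpu, err := readCPUReservedV2("/sys/fs/cgroup/ecstasks/example-task")
    mem, _ := readMemReservedV2("/sys/fs/cgroup/ecstasks/example-task")
    fmt.Println(cpu, mem, err)
}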

reference

[1] https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/awscontainerinsightreceiver/internal/ecsInfo/cgroup.go

func (c *cgroupScanner) refresh() {

    if c.ecsTaskInfoProvider == nil {
        return
    }

    cpuReserved := int64(0)
    memReserved := int64(0)

    for _, task := range c.ecsTaskInfoProvider.getRunningTasksInfo() {
        taskID, err := getTaskCgroupPathFromARN(task.ARN)
        if err != nil {
            c.logger.Warn("Failed to get ecs taskid from arn: ", zap.Error(err))
            continue
        }
        // ignore the one only consume 2 shares which is the default value in cgroup
        if cr := c.getCPUReservedInTask(taskID, c.containerInstanceInfoProvider.GetClusterName()); cr > 2 {
            cpuReserved += cr
        }
        memReserved += c.getMEMReservedInTask(taskID, c.containerInstanceInfoProvider.GetClusterName(), task.Containers)
    }
    c.Lock()
    defer c.Unlock()
    c.memReserved = memReserved
    c.cpuReserved = cpuReserved
}

func (c *cgroupScanner) getCPUReservedInTask(taskID string, clusterName string) int64 {
    cpuPath, err := getCGroupPathForTask(c.mountPoint, "cpu", taskID, clusterName)
    if err != nil {
        c.logger.Warn("Failed to get cpu cgroup path for task: ", zap.Error(err))
        return int64(0)
    }

    // check if hard limit is configured
    if cfsQuota, err := readInt64(cpuPath, "cpu.cfs_quota_us"); err == nil && cfsQuota != -1 {
        if cfsPeriod, err := readInt64(cpuPath, "cpu.cfs_period_us"); err == nil && cfsPeriod > 0 {
            return int64(math.Ceil(float64(1024*cfsQuota) / float64(cfsPeriod)))
        }
    }

    if shares, err := readInt64(cpuPath, "cpu.shares"); err == nil {
        return shares
    }

    return int64(0)
}

func (c *cgroupScanner) getMEMReservedInTask(taskID string, clusterName string, containers []ECSContainer) int64 {
    memPath, err := getCGroupPathForTask(c.mountPoint, "memory", taskID, clusterName)
    if err != nil {
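        // NOTE: the stray "%v" verb below is why "%v" appears verbatim in the
        // log output above; zap.Error attaches the error as a structured field
        // rather than formatting it into the message.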
        c.logger.Warn("Failed to get memory cgroup path for task: %v", zap.Error(err))
        return int64(0)
    }

    if memReserved, err := readInt64(memPath, "memory.limit_in_bytes"); err == nil && memReserved != kernelMagicCodeNotSet {
        return memReserved
    }

    // sum the containers' memory if the task's memory limit is not configured
    sum := int64(0)
    for _, container := range containers {
        containerPath := filepath.Join(memPath, container.DockerID)

        // soft limit first
        if softLimit, err := readInt64(containerPath, "memory.soft_limit_in_bytes"); err == nil && softLimit != kernelMagicCodeNotSet {
            sum += softLimit
            continue
        }

        // try hard limit when soft limit is not configured
        if hardLimit, err := readInt64(containerPath, "memory.limit_in_bytes"); err == nil && hardLimit != kernelMagicCodeNotSet {
            sum += hardLimit
        }
    }
    return sum
}
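
For reference, the v1 path that getCGroupPathForTask resolves appears to be <mountPoint>/<subsystem>/ecs/<clusterName>/<taskID>, which matches the /cgroup/cpu/ecs/AL2023/... path in the log output above. Under cgroup v2 there is a single unified hierarchy, so the per-controller cpu/ and memory/ directories this lookup expects never exist.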

[2] https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/awscontainerinsightreceiver/internal/cadvisor/cadvisor_linux.go#L199C1-L231C2

for _, cadvisormetric := range cadvisormetrics {
    if cadvisormetric.GetMetricType() == ci.TypeInstance {
        metricMap := cadvisormetric.GetFields()
        cpuReserved := c.ecsInfo.GetCPUReserved()
        memReserved := c.ecsInfo.GetMemReserved()
        if cpuReserved == 0 && memReserved == 0 {
            c.logger.Warn("Can't get mem or cpu reserved!")
        }
        cpuLimits, cpuExist := metricMap[ci.MetricName(ci.TypeInstance, ci.CPULimit)]
        memLimits, memExist := metricMap[ci.MetricName(ci.TypeInstance, ci.MemLimit)]

        if !cpuExist && !memExist {
            c.logger.Warn("Can't get mem or cpu limit")
        } else {
            // cgroup standard cpulimits should be cadvisor standard * 1.024
            metricMap[ci.MetricName(ci.TypeInstance, ci.CPUReservedCapacity)] = float64(cpuReserved) / (float64(cpuLimits.(int64)) * 1.024) * 100
            metricMap[ci.MetricName(ci.TypeInstance, ci.MemReservedCapacity)] = float64(memReserved) / float64(memLimits.(int64)) * 100
        }
    }
}
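
The 1.024 factor exists because cpuReserved is expressed in cgroup CPU shares (1024 per vCPU) while cAdvisor's cpuLimits is in millicores (1000 per vCPU); multiplying the denominator by 1.024 puts both sides in the same unit. With cpuReserved and memReserved stuck at 0 on AL2023, both reserved-capacity metrics here always evaluate to 0.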

[3] https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-ECS.html

[4] https://docs.aws.amazon.com/linux/al2023/ug/cgroupv2.html

AL2 supports cgroupv1, and AL2023 supports cgroupv2. This is notable when running containerized workloads, such as when using AL2023-based Amazon ECS AMIs to host containerized workloads.

Collector version

ADOT Collector version: v0.39.1

Environment information

Environment

OS: Amazon Linux 2023

OpenTelemetry Collector configuration

extensions:
  health_check:

receivers:
  awscontainerinsightreceiver:
    collection_interval: 10s
    container_orchestrator: ecs

processors:
  batch/metrics:
    timeout: 60s

exporters:
  awsemf:
    namespace: ECS/ContainerInsights
    log_group_name: '/aws/ecs/containerinsights/{ClusterName}/performance'
    log_stream_name: 'instanceTelemetry/{ContainerInstanceId}'
    resource_to_telemetry_conversion:
      enabled: true
    dimension_rollup_option: NoDimensionRollup
    parse_json_encoded_attr_values: [Sources]
    metric_declarations:
      # instance metrics
      - dimensions: [ [ ContainerInstanceId, InstanceId, ClusterName] ]
        metric_name_selectors:
          - instance_cpu_utilization
          - instance_memory_utilization
          - instance_network_total_bytes
          - instance_cpu_reserved_capacity
          - instance_memory_reserved_capacity
          - instance_number_of_running_tasks
          - instance_filesystem_utilization
      - dimensions: [ [ClusterName] ]
        metric_name_selectors:
          - instance_cpu_utilization
          - instance_memory_utilization
          - instance_network_total_bytes
          - instance_cpu_reserved_capacity
          - instance_memory_reserved_capacity
          - instance_number_of_running_tasks
          - instance_cpu_usage_total
          - instance_cpu_limit
          - instance_memory_working_set
          - instance_memory_limit
  logging:
    loglevel: debug
service:
  pipelines:
    metrics:
      receivers: [awscontainerinsightreceiver]
      processors: [batch/metrics]
      exporters: [awsemf,logging]
  extensions: [health_check]

Log output

2024-06-21T06:02:46.395Z warn ecsInfo/cgroup.go:121 Failed to get cpu cgroup path for task: {"kind": "receiver", "name": "awscontainerinsightreceiver", "data_type": "metrics", "error": "CGroup Path \"/cgroup/cpu/ecs/AL2023/dc454d1e84c0478fa8ed683523fe7e55\" does not exist"}
2024-06-21T06:02:46.395Z warn ecsInfo/cgroup.go:142 Failed to get memory cgroup path for task: %v {"kind": "receiver", "name": "awscontainerinsightreceiver", "data_type": "metrics", "error": "CGroup Path \"/cgroup/memory/ecs/AL2023/dc454d1e84c0478fa8ed683523fe7e55\" does not exist"}
2024-06-21T06:02:46.300Z warn cadvisor/cadvisor_linux.go:211 Can't get mem or cpu reserved! {"kind": "receiver", "name": "awscontainerinsightreceiver", "data_type": "metrics"}


Additional context

No response
github-actions[bot] commented 1 week ago

Pinging code owners:

0nihajim commented 1 week ago

I will close this issue and create a new feature request issue for supporting AL2023: #33716