AllanOricil opened this issue 5 months ago
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
It is also not iterating over all the log groups in my AWS account. It iterates over and over on log groups that don't even have log streams anymore, and it never tries any log groups other than the ones shown below.
2024-04-09T17:01:47.395Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/apigateway/w4r1tjdtbj/v1:*\",\n CreationTime: 1704593418144,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/apigateway/w4r1tjdtbj/v1\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/apigateway/w4r1tjdtbj/v1\",\n MetricFilterCount: 0,\n StoredBytes: 33873236\n}"}
2024-04-09T17:01:47.395Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/apigateway/welcome:*\",\n CreationTime: 1686097976313,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/apigateway/welcome\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/apigateway/welcome\",\n MetricFilterCount: 0,\n StoredBytes: 110\n}"}
2024-04-09T17:01:47.395Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/DockerBuild:*\",\n CreationTime: 1690469165284,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/DockerBuild\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/DockerBuild\",\n MetricFilterCount: 0,\n StoredBytes: 61572\n}"}
2024-04-09T17:01:47.395Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/LLEProjectE0AA5ECD-AICRGHcCk5hn:*\",\n CreationTime: 1711131852056,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/LLEProjectE0AA5ECD-AICRGHcCk5hn\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/LLEProjectE0AA5ECD-AICRGHcCk5hn\",\n MetricFilterCount: 0,\n StoredBytes: 2808\n}"}
2024-04-09T17:01:47.395Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyBucketPipelineLintPipelin-ZfBXxNdpfAzC:*\",\n CreationTime: 1705729018015,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyBucketPipelineLintPipelin-ZfBXxNdpfAzC\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/MyBucketPipelineLintPipelin-ZfBXxNdpfAzC\",\n MetricFilterCount: 0,\n StoredBytes: 13570\n}"}
2024-04-09T17:01:47.395Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyBucketPipelinePublishPipe-kmL0P9FPfuzQ:*\",\n CreationTime: 1705730226062,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyBucketPipelinePublishPipe-kmL0P9FPfuzQ\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/MyBucketPipelinePublishPipe-kmL0P9FPfuzQ\",\n MetricFilterCount: 0,\n StoredBytes: 43284\n}"}
2024-04-09T17:01:47.396Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyBucketPipelineTestPipelin-UHa4TfoMAlV3:*\",\n CreationTime: 1705729172049,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyBucketPipelineTestPipelin-UHa4TfoMAlV3\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/MyBucketPipelineTestPipelin-UHa4TfoMAlV3\",\n MetricFilterCount: 0,\n StoredBytes: 367152\n}"}
2024-04-09T17:01:47.396Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyPipeline-selfupdate:*\",\n CreationTime: 1690237246405,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyPipeline-selfupdate\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/MyPipeline-selfupdate\",\n MetricFilterCount: 0,\n StoredBytes: 17016\n}"}
2024-04-09T17:01:47.396Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyProject39F7B0AE-1WinSXoew9Qo:*\",\n CreationTime: 1686462158018,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyProject39F7B0AE-1WinSXoew9Qo\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/MyProject39F7B0AE-1WinSXoew9Qo\",\n MetricFilterCount: 0,\n StoredBytes: 11720\n}"}
2024-04-09T17:01:47.396Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyProject39F7B0AE-8K3Qo5kQ14Sd:*\",\n CreationTime: 1686767847677,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyProject39F7B0AE-8K3Qo5kQ14Sd\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/MyProject39F7B0AE-8K3Qo5kQ14Sd\",\n MetricFilterCount: 0,\n StoredBytes: 2253\n}"}
2024-04-09T17:01:56.753Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyProject39F7B0AE-1WinSXoew9Qo:*\",\n CreationTime: 1686462158018,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyProject39F7B0AE-1WinSXoew9Qo\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/MyProject39F7B0AE-1WinSXoew9Qo\",\n MetricFilterCount: 0,\n StoredBytes: 11720\n}"}
2024-04-09T17:02:08.930Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/apigateway/w4r1tjdtbj/v1:*\",\n CreationTime: 1704593418144,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/apigateway/w4r1tjdtbj/v1\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/apigateway/w4r1tjdtbj/v1\",\n MetricFilterCount: 0,\n StoredBytes: 33873236\n}"}
2024-04-09T17:02:08.930Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/apigateway/welcome:*\",\n CreationTime: 1686097976313,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/apigateway/welcome\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/apigateway/welcome\",\n MetricFilterCount: 0,\n StoredBytes: 110\n}"}
2024-04-09T17:02:08.930Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/DockerBuild:*\",\n CreationTime: 1690469165284,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/DockerBuild\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/DockerBuild\",\n MetricFilterCount: 0,\n StoredBytes: 61572\n}"}
2024-04-09T17:02:08.930Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/LLEProjectE0AA5ECD-AICRGHcCk5hn:*\",\n CreationTime: 1711131852056,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/LLEProjectE0AA5ECD-AICRGHcCk5hn\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/LLEProjectE0AA5ECD-AICRGHcCk5hn\",\n MetricFilterCount: 0,\n StoredBytes: 2808\n}"}
2024-04-09T17:02:08.930Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyBucketPipelineLintPipelin-ZfBXxNdpfAzC:*\",\n CreationTime: 1705729018015,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyBucketPipelineLintPipelin-ZfBXxNdpfAzC\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/MyBucketPipelineLintPipelin-ZfBXxNdpfAzC\",\n MetricFilterCount: 0,\n StoredBytes: 13570\n}"}
2024-04-09T17:02:08.930Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyBucketPipelinePublishPipe-kmL0P9FPfuzQ:*\",\n CreationTime: 1705730226062,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyBucketPipelinePublishPipe-kmL0P9FPfuzQ\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/MyBucketPipelinePublishPipe-kmL0P9FPfuzQ\",\n MetricFilterCount: 0,\n StoredBytes: 43284\n}"}
2024-04-09T17:02:08.930Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyBucketPipelineTestPipelin-UHa4TfoMAlV3:*\",\n CreationTime: 1705729172049,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyBucketPipelineTestPipelin-UHa4TfoMAlV3\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/MyBucketPipelineTestPipelin-UHa4TfoMAlV3\",\n MetricFilterCount: 0,\n StoredBytes: 367152\n}"}
2024-04-09T17:02:08.930Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyPipeline-selfupdate:*\",\n CreationTime: 1690237246405,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyPipeline-selfupdate\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/MyPipeline-selfupdate\",\n MetricFilterCount: 0,\n StoredBytes: 17016\n}"}
2024-04-09T17:02:08.930Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch/2", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyProject39F7B0AE-1WinSXoew9Qo:*\",\n CreationTime: 1686462158018,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/codebuild/MyProject39F7B0AE-1WinSXoew9Qo\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/codebuild/MyProject39F7B0AE-1WinSXoew9Qo\",\n MetricFilterCount: 0,\n StoredBytes: 11720\n}"}
@djaglowski @schmikei could you help me set up my dev environment so that I can fix this?
I think it might be related. If you'd be willing to try out the PR to see if it resolves your issue, we'd very much appreciate it!
@schmikei how can I do it? I'm using the otel-collector binary
You can check out the fork and run make otelcontribcol to build a binary!
@schmikei can I build a Linux dist on a Mac?
Never mind @schmikei, I asked that because my t2.micro died while building it. I created a t3.medium, and I'm now trying to build it again. Let's hope it is enough.
@schmikei do you get this error while building? I checked out your branch and then ran make otelcontribcol.
@schmikei I spent the day building this stuff but it did not work :/
I used the same config.yaml with both the 0.97.0 release and the binary I built from your branch, and they produced different results.
Below is the output when using 0.97.0
2024-04-11T04:59:12.112Z info prometheusreceiver@v0.97.0/metrics_receiver.go:299 Starting scrape manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2024-04-11T04:59:21.971Z debug awscloudwatchreceiver@v0.97.0/logs.go:287 attempting to discover log groups. {"kind": "receiver", "name": "awscloudwatch", "data_type": "logs", "limit": 100}
2024-04-11T04:59:22.003Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/lambda/get-instances-api-function:*\",\n CreationTime: 1701809123724,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/lambda/get-instances-api-function\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/lambda/get-instances-api-function\",\n MetricFilterCount: 0,\n RetentionInDays: 7,\n StoredBytes: 137470\n}"}
2024-04-11T04:59:31.970Z debug awscloudwatchreceiver@v0.97.0/logs.go:287 attempting to discover log groups. {"kind": "receiver", "name": "awscloudwatch", "data_type": "logs", "limit": 100}
2024-04-11T04:59:31.994Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/lambda/get-instances-api-function:*\",\n CreationTime: 1701809123724,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/lambda/get-instances-api-function\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/lambda/get-instances-api-function\",\n MetricFilterCount: 0,\n RetentionInDays: 7,\n StoredBytes: 137470\n}"}
2024-04-11T04:59:41.971Z debug awscloudwatchreceiver@v0.97.0/logs.go:287 attempting to discover log groups. {"kind": "receiver", "name": "awscloudwatch", "data_type": "logs", "limit": 100}
2024-04-11T04:59:41.997Z debug awscloudwatchreceiver@v0.97.0/logs.go:323 discovered log group {"kind": "receiver", "name": "awscloudwatch", "data_type": "logs", "log group": "{\n Arn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/lambda/get-instances-api-function:*\",\n CreationTime: 1701809123724,\n LogGroupArn: \"arn:aws:logs:us-east-2:845044614340:log-group:/aws/lambda/get-instances-api-function\",\n LogGroupClass: \"STANDARD\",\n LogGroupName: \"/aws/lambda/get-instances-api-function\",\n MetricFilterCount: 0,\n RetentionInDays: 7,\n StoredBytes: 137470\n}"}
Below you can see the result I got from the new binary
2024-04-11T04:58:36.200Z debug awscloudwatchreceiver@v0.97.0/logs.go:287 attempting to discover log groups. {"kind": "receiver", "name": "awscloudwatch", "data_type": "logs", "limit": 100}
2024-04-11T04:58:36.200Z error awscloudwatchreceiver@v0.97.0/logs.go:165 unable to perform discovery of log groups {"kind": "receiver", "name": "awscloudwatch", "data_type": "logs", "error": "unable to list log groups: InvalidParameter: 1 validation error(s) found.\n- minimum field size of 1, DescribeLogGroupsInput.NextToken.\n"}
github.com/open-telemetry/opentelemetry-collector-contrib/receiver/awscloudwatchreceiver.(*logsReceiver).startPolling
github.com/open-telemetry/opentelemetry-collector-contrib/receiver/awscloudwatchreceiver@v0.97.0/logs.go:165
2024-04-11T04:58:46.201Z debug awscloudwatchreceiver@v0.97.0/logs.go:287 attempting to discover log groups. {"kind": "receiver", "name": "awscloudwatch", "data_type": "logs", "limit": 100}
2024-04-11T04:58:46.201Z error awscloudwatchreceiver@v0.97.0/logs.go:165 unable to perform discovery of log groups {"kind": "receiver", "name": "awscloudwatch", "data_type": "logs", "error": "unable to list log groups: InvalidParameter: 1 validation error(s) found.\n- minimum field size of 1, DescribeLogGroupsInput.NextToken.\n"}
github.com/open-telemetry/opentelemetry-collector-contrib/receiver/awscloudwatchreceiver.(*logsReceiver).startPolling
@djaglowski @schmikei can you help here?
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
@schmikei do you intend to work on this?
@AllanOricil did you end up trying the latest release? I've validated in my lab that the receiver is currently working as expected, and I'm unable to replicate your specific behavior at the moment.
If you want to start narrowing down your config to target specific log groups like in our examples: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/awscloudwatchreceiver#sample-configs
Apologies that I cannot be more helpful. If you do find something, I'm always happy to review a PR 😄
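As an illustration only, here is a minimal sketch of a narrowed-down config in the style of those sample configs, using a prefix filter so autodiscovery only targets a subset of log groups; the region and prefix values are placeholders, not taken from this thread:

```yaml
awscloudwatch:
  region: us-east-2          # placeholder region
  logs:
    poll_interval: 1m
    groups:
      autodiscover:
        limit: 100
        prefix: /aws/lambda/   # placeholder prefix to narrow discovery
```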
I have a similar issue. When sending 2000 logs to a log stream, I only get about 80% of them processed by the receiver. I am using version 0.103.0 in Docker. Neither the system nor the Docker container hits CPU, memory, or other limits that I can see.
My receiver config:

```yaml
awscloudwatch:
  region: eu-central-1
  logs:
    poll_interval: 1m
```
Thanks
Aah, I see this: max_events_per_request (default: 50). I am probably hitting that limit.
Added max_events_per_request=5000 in my config and will test again.
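For reference, a minimal sketch of that change, assuming max_events_per_request sits under the logs section as in the receiver README (the rest mirrors the snippet above):

```yaml
awscloudwatch:
  region: eu-central-1
  logs:
    poll_interval: 1m
    max_events_per_request: 5000   # raised from the default of 50
```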
Ok, so I did some tests with AWS API Gateway. I sent 1000 API requests, one request every 250 ms. It took 6m 40s to send them using Postman.
Of those 1000 requests, all 1000 are processed by AWS API Gateway, and there are 1000 log lines containing "AWS Integration Endpoint RequestId" in CloudWatch. The receiver only processes 792 lines containing the same text. I export the logs to Loki and I see no messages failing there (monitored with Prometheus).
I am attaching the timestamps of the CloudWatch logs and the Loki logs: Loki-logs.txt, Cloudwatch-logs.txt
When you compare the two files, you see a gap in the logs about every minute (my poll interval). Hope this helps to find a solution.
Increasing or decreasing the poll interval does not help.
I'm starting to think CloudWatch may take a second to serve the log entries on the API (just a suspicion). I created a spike PR in #33809 that will set the next end time...
I will try to dedicate some time to see if I can replicate the specific behavior, but if you'd like to test out the PR to see if it helps, that would be useful while I try to replicate your test case.
@schmikei once I have time I will test it again
I did another test, this time with only API Gateway log groups. That is 4 log groups out of the 38 that I have in total. But that does not make a difference.
The gaps in the logs that I see are between 14 and 18 seconds with a 1 min poll interval.
@schmikei I don't have the mad skills to do this. But if you can provide a Docker image, I can test. Could it be that the receiver does not generate metrics that are exported to Prometheus?
@schmikei There was something bugging me, namely that if it is an authentication issue, I should get a better log retrieval percentage when using a longer poll interval. That is indeed the case, but the gain is marginal: from around 79% with a 1 min poll to 83% with a 5 min poll.
Here is a graph of the logs I get. This is done with a 5 min poll, and you see a big gap every 5 min. When you look closely, you also see small gaps every couple of seconds.
It turns out that when CloudWatch creates a new log stream, the first logs (API Gateway call) are not picked up. CloudWatch is creating a new log stream every 3-10 seconds.
A possible solution would be to work with a time offset equal to the poll rate, so if poll_interval is 1m, the time offset would also be 1m. This could also solve the slow authentication issue, as slow authentication would not be a problem as long as it is shorter than the poll interval. You would have to keep track of the last log timestamp you retrieved. You could also make the offset optional so that the user can choose whether to use it.
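Purely to illustrate the proposal (no such option exists in the receiver today; the name start_offset is hypothetical), the config could look something like this:

```yaml
awscloudwatch:
  region: eu-central-1
  logs:
    poll_interval: 1m
    # hypothetical option: shift the query window back by one poll interval
    # so that events written to newly created log streams are still picked up
    start_offset: 1m
```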
@AllanOricil @schmikei Any news on this? I see it is still marked by GitHub as "Issues needing triage". Thanks.
I did not have time to test the latest changes. If you did and the issue is still there, make a video and post it here.
Heya @AllanOricil, I am unable to test this as I do not know how to build it and then use it in Docker.
@schmikei Can you tell me which AWS API you use for the authentication? That way I can open a case with AWS in the hope they can see why the authentication takes so long (15+ sec). It may be a configuration issue.
@Jessimon I can't invest time in testing it again. Once I reach a point in my project where it makes sense to add probes to every single running service, I will try again.
Component(s)
receiver/awscloudwatch
What happened?
Description
1 - Past log streams aren't exported
2 - Log streams are incomplete
Steps to Reproduce
1 - create a Lambda function that logs more than 15 log lines to CloudWatch
2 - run this function several times and ensure a single log stream has more than 15 log lines
3 - on an EC2 machine, run otel-collector with this receiver
WARNING: don't forget to replace NAME_OF_YOUR_LAMBDA_FUNCTION_LOG_GROUP with the name of your Lambda function's log group, and don't forget to register the receiver in the logs pipeline (see the config sketch below).
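A minimal sketch of what such a configuration could look like, keeping the placeholder from the warning above and following the named-group examples in the receiver README; the region and exporter are illustrative, not the exact config used here:

```yaml
receivers:
  awscloudwatch:
    region: us-east-2                      # illustrative region
    logs:
      poll_interval: 1m
      groups:
        named:
          NAME_OF_YOUR_LAMBDA_FUNCTION_LOG_GROUP:

exporters:
  debug:                                   # illustrative exporter

service:
  pipelines:
    logs:
      receivers: [awscloudwatch]
      exporters: [debug]
```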
4 - verify that logs are processed with no error. You should see something as shown below
Expected Result
Actual Result
Collector version
0.97.0
Environment information
Environment
NAME="Amazon Linux" VERSION="2023" ID="amzn" ID_LIKE="fedora" VERSION_ID="2023" PLATFORM_ID="platform:al2023" PRETTY_NAME="Amazon Linux 2023" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2023" HOME_URL="https://aws.amazon.com/linux/" BUG_REPORT_URL="https://github.com/amazonlinux/amazon-linux-2023" SUPPORT_END="2028-03-15"
OpenTelemetry Collector configuration
Log output
Additional context
I'm using SigNoz to see my logs
Compare the images to verify that there are missing log events for my log stream named 2024/04/09/[$LATEST]9a79cb34998a4037bf1e3ff5df35fea5.
In Signoz, I filtered logs by the log stream name