I have several jobs running whose logs were previously being pushed to CloudWatch, where I could query them in CloudWatch Logs Insights by job name. Recently, however, I only see the cloudwatch and fluentd pod logs, not the actual job logs I need.
In some cases, I get this error from fluentd:
unable to get http response from http://169.254.170.2/v2/metadata, error: unable to get response from http://169.254.170.2/v2/metadata, error: Get "http://169.254.170.2/v2/metadata": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
And in others, I get this from the cloudwatch agent:
access ECS task metadata fail with response unable to get response from http://169.254.170.2/v2/metadata, error: Get "http://169.254.170.2/v2/metadata": context deadline exceeded (Client.Timeout exceeded while awaiting headers), assuming I'm not running in ECS.
No csm configuration found.
No metric configuration found.
Configuration validation first phase succeeded
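If I'm reading these errors correctly, both agents seem to be doing roughly the equivalent of the minimal Go sketch below: a plain HTTP GET against the ECS task metadata endpoint (169.254.170.2) that times out before any response headers arrive. The 2-second timeout here is my assumption, not the agents' actual value:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Roughly what the agents appear to attempt: an HTTP GET to the
	// ECS task metadata endpoint with a client-side timeout.
	// (2s is an assumed value; the real agents may use something else.)
	client := &http.Client{Timeout: 2 * time.Second}

	resp, err := client.Get("http://169.254.170.2/v2/metadata")
	if err != nil {
		// In my case this is presumably where it fails with
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
		fmt.Println("error:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```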
I am new to this and don't fully understand how these pieces fit together. Can someone please help me understand what might be going wrong? Is this a version upgrade issue or something else?
Thanks!