hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

Userdata scripts in EC2 do not get updated correctly #10696

Open ghost opened 5 years ago

ghost commented 5 years ago

This issue was originally opened by @jwlogemann as hashicorp/terraform#23246. It was migrated here as a result of the provider split. The original body of the issue is below.


Hi, I've made some changes to my user-data script, but Terraform does not apply them to the EC2 instances. It appears to cache an old version of the script and keeps applying that instead of picking up the local changes.

Terraform Version

Terraform v0.12.12

Terraform Configuration Files

ec2.tf:

```hcl
resource "aws_instance" "ec2_instance" {
  # ...
  user_data = templatefile("${path.module}/../shared-templates/user-data.sh", {
    log_group            = aws_cloudwatch_log_group.ec2_log_group.name
    additional_user_data = var.additional_user_data
  })
  # ...
}
```

First part of the user-data script:
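For anyone hitting this on recent provider versions: since AWS provider v4.x, `aws_instance` has a `user_data_replace_on_change` argument that forces instance replacement whenever the rendered user data changes (by default a user-data change alone does not recreate the instance). A sketch based on the configuration above, with the surrounding arguments elided:

```hcl
resource "aws_instance" "ec2_instance" {
  # ... (AMI, instance type, etc. as in the original config)

  user_data = templatefile("${path.module}/../shared-templates/user-data.sh", {
    log_group            = aws_cloudwatch_log_group.ec2_log_group.name
    additional_user_data = var.additional_user_data
  })

  # AWS provider >= 4.x: recreate the instance whenever the rendered
  # user data changes, instead of only updating the attribute.
  user_data_replace_on_change = true
}
```

Note this doesn't exist on the provider versions contemporary with the original report; there, `terraform taint` on the instance is the usual way to force re-provisioning with fresh user data.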

```shell
#!/bin/bash
# Save script output
set -x
exec > >(tee /var/log/user-data.log|logger -t user-data ) 2>&1
echo BEGIN
date '+%Y-%m-%d %H:%M:%S'

OS_NAME=$(cat /etc/os-release | grep ^NAME | cut -d '"' -f2 | cut -d ' ' -f1)
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
LOG_GROUP="${log_group}"
REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | grep '\"region\"' | cut -d\" -f4)
```

Relevant part of tfstate:

```json
{
  "module": "module.b2b_db2",
  "mode": "data",
  "type": "template_file",
  "name": "ec2_userdata",
  "provider": "provider.template",
  "instances": [
    {
      "schema_version": 0,
      "attributes": {
        "filename": null,
        "id": "353e47d81936964543a43f677ac73701fec92eca5ff52074599ce675270b685e",
        "rendered": "#!/bin/sh\n# Stream instance logs to CloudWatch Logs\nset -x\nOS_NAME=$(cat /etc/os-release|grep ^NAME|cut -d '\"' -f2|cut -d ' ' -f1)\ngrep '/var/log/cfn-hup.log' /etc/awslogs/awslogs.conf\nif [ $? -ne 0 ]; then\n INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)\n LOG_GROUP=\"/ec2/b2b-DB2-instance-log-group\"\n REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | grep '\\"region\\"' | cut -d\\" -f4)\n # install the awslogs package\n if [$OS_NAME == \"Redhat\"] || [ $OS_NAME == \"CentOS\" ] || [ $OS_NAME == \"Amazon\" ];then\n yum install -y aws-cli awslogs\n elif [ $OS_NAME == \"Ubuntu\"]; then\n apt-get update && apt-get install awscli awslogs\n else\n echo \"unsupported OS\"\n fi\nfi\n\n# update awscli.conf with regions where logs to be sent\ngrep 'region = ' /etc/awslogs/awscli.conf\nif [ $? -ne 0 ]; then\n echo \"region = ${REGION}\" >> /etc/awslogs/awscli.conf\n else\n sed -i \"s/region = ./region = ${REGION}/g\" /etc/awslogs/awscli.conf\nfi\n\n# adding other log files\n\nfor log in $(find /var -iname \*.log -o -name messages|tr '\n' ' ');\ndo\n echo -e \"\n[${log}]\\n \nfile = ${log}\\n \nlog_group_name = ${LOG_GROUP}\\n \nlog_stream_name = ${INSTANCEID}${log}\\n \ninitial_position = start_of_file\\n \ndatetime_format = %b %d %H:%M:%S\\n \nbuffer_duration = 5000\" >> /etc/awslogs/awslogs.conf\ndone\n\n# enable awslogd service\nsystemctl enable awslogsd\n# restart awslogs service\nsystemctl restart awslogsd\n# enable awslogs service to start on system boot\nchkconfig awslogsd on\n# Additional user data\necho \"Running additional user data - setting up DB2 instance.\"\n\necho \"Mounting EFS data volume\"\nmkdir /data\nmount -t efs -o tls fs-94c4ddcd:/ /data\n",
        "template": "#!/bin/sh\n# Stream instance logs to CloudWatch Logs\nset -x\nOS_NAME=$(cat /etc/os-release|grep ^NAME|cut -d '\"' -f2|cut -d ' ' -f1)\ngrep '/var/log/cfn-hup.log' /etc/awslogs/awslogs.conf\nif [ $? -ne 0 ]; then\n INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)\n LOG_GROUP=\"${log_group}\"\n REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | grep '\\"region\\"' | cut -d\\" -f4)\n # install the awslogs package\n if [$OS_NAME == \"Redhat\"] || [ $OS_NAME == \"CentOS\" ] || [ $OS_NAME == \"Amazon\" ];then\n yum install -y aws-cli awslogs\n elif [ $OS_NAME == \"Ubuntu\"]; then\n apt-get update && apt-get install awscli awslogs\n else\n echo \"unsupported OS\"\n fi\nfi\n\n# update awscli.conf with regions where logs to be sent\ngrep 'region = ' /etc/awslogs/awscli.conf\nif [ $? -ne 0 ]; then\n echo \"region = $${REGION}\" >> /etc/awslogs/awscli.conf\n else\n sed -i \"s/region = ./region = $${REGION}/g\" /etc/awslogs/awscli.conf\nfi\n\n# adding other log files\n\nfor log in $(find /var -iname \*.log -o -name messages|tr '\n' ' ');\ndo\n echo -e \"\n[$${log}]\\n \nfile = $${log}\\n \nlog_group_name = $${LOG_GROUP}\\n \nlog_stream_name = $${INSTANCEID}$${log}\\n \ninitial_position = start_of_file\\n \ndatetime_format = %b %d %H:%M:%S\\n \nbuffer_duration = 5000\" >> /etc/awslogs/awslogs.conf\ndone\n\n# enable awslogd service\nsystemctl enable awslogsd\n# restart awslogs service\nsystemctl restart awslogsd\n# enable awslogs service to start on system boot\nchkconfig awslogsd on\n# Additional user data\n${additional_user_data}\n",
        "vars": {
          "additional_user_data": "echo \"Running additional user data - setting up DB2 instance.\"\n\necho \"Mounting EFS data volume\"\nmkdir /data\nmount -t efs -o tls fs-94c4ddcd:/ /data",
          "log_group": "/ec2/b2b-DB2-instance-log-group"
        }
      }
    }
  ]
}
```

Note that the user-data script recorded in state is completely different from the local one: for instance, the local script starts with `#!/bin/bash`, while the one in tfstate starts with `#!/bin/sh`.
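An aside on the `$${REGION}`-style sequences visible in the stored `template` string: in both `templatefile()` and the legacy `template_file` data source, `$${...}` is an escape that renders to a literal `${...}` for the shell, while unescaped `${...}` is interpolated by Terraform at render time. A minimal sketch (the file name and log-group value are illustrative, not from the report):

```hcl
# user-data.sh.tpl (fragment):
#   LOG_GROUP="${log_group}"     <- filled in by Terraform when rendering
#   echo "region = $${REGION}"   <- rendered as the literal shell text ${REGION}

locals {
  user_data = templatefile("${path.module}/user-data.sh.tpl", {
    log_group = "/ec2/example-log-group"
  })
}
```

So the `$${...}` forms in the state excerpt are expected; the surprising part is only that the rendered script no longer matches the local file.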

Debug Output

https://drive.google.com/file/d/1IRYu14QYnKhjlPW5S2G4JDDM9MzaEeV_/view?usp=sharing

Expected Behavior

New user-data should have been applied

Actual Behavior

Terraform did not change anything

Steps to Reproduce

  1. terraform init
  2. terraform apply
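One thing worth checking (a diagnostic sketch, not a confirmed fix): the state excerpt above still contains a `data "template_file"` resource, while the current configuration renders user data with the `templatefile()` function, so the instance may still be wired to the stale data source. The addresses below come from the state excerpt; the instance address is an assumption:

```shell
# See what is actually tracked in state
terraform state list | grep -i template

# Inspect the stale data source from the excerpt above
terraform state show 'module.b2b_db2.data.template_file.ec2_userdata'

# If that data source no longer exists in configuration, drop it from state
terraform state rm 'module.b2b_db2.data.template_file.ec2_userdata'

# Force the instance to be recreated with freshly rendered user data
# (resource address is an assumption based on the ec2.tf snippet)
terraform taint 'module.b2b_db2.aws_instance.ec2_instance'
terraform apply
```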

github-actions[bot] commented 3 years ago

Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.

If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!

justinretzolk commented 2 years ago

Hey @jwlogemann 👋 Thank you for taking the time to file this issue! Given that there's been a number of AWS provider releases since you initially filed it, can you confirm whether you're still experiencing this behavior?

bas-kirill commented 8 months ago

Having the same error.