I've been trying to run a kubectl command via CommandRunner, and it appears the AWS credentials I'm passing aren't being recognized.
The script updates the EC2 instance, uninstalls the AWS CLI, reinstalls the updated version, installs kubectl, exports the AWS keys, configures aws eks, then runs kubectl.

First I tried just exporting the AWS keys.
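That version was essentially the same script as the one below, but with the keys exported inline instead of written to files (keys redacted):

```
# same yum / curl / kubectl setup steps as in the full script below, then:
export AWS_ACCESS_KEY_ID=XXX && export AWS_SECRET_ACCESS_KEY=XXX && export AWS_SESSION_TOKEN=XXX \
 && aws eks --region us-east-2 update-kubeconfig --name test-eks-cluster \
 && kubectl apply -f /home/ec2-user/cluster-autoscaler.yaml \
 && echo success > /command-output.txt
```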
I then tried creating the .aws/credentials and .aws/config files (note: the original had `./home/ec2-user/...` relative paths on the chmod and cp steps; shown here as absolute paths):

```
sudo yum update -y \
 && sudo yum remove awscli -y \
 && curl 'https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip' -o 'awscliv2.zip' \
 && unzip awscliv2.zip \
 && sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update \
 && curl -o /home/ec2-user/kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.29.6/2024-07-12/bin/linux/amd64/kubectl \
 && sudo chmod +x /home/ec2-user/kubectl \
 && mkdir -p /home/ec2-user/bin \
 && cp /home/ec2-user/kubectl /home/ec2-user/bin/kubectl \
 && echo 'copied kubectl to home/ec2-user/bin' \
 && cp /home/ec2-user/kubectl /usr/local/bin/kubectl \
 && echo 'copied kubectl to usr/local/bin' \
 && aws s3 cp s3://xxxxx/yamls/cluster-autoscaler.yaml /home/ec2-user/cluster-autoscaler.yaml \
 && echo 'downloaded yaml' \
 && mkdir -m 777 /home/ec2-user/.aws/ \
 && printf '[default]\naws_access_key_id=XXX\naws_secret_access_key=XXX\naws_session_token=XXX' > /home/ec2-user/.aws/credentials \
 && printf '[default]\nregion=us-east-1\noutput=json' > /home/ec2-user/.aws/config \
 && aws eks --region us-east-2 update-kubeconfig --name test-eks-cluster \
 && kubectl apply -f /home/ec2-user/cluster-autoscaler.yaml \
 && echo success > /command-output.txt
```
With both of these I get the same error:

```
error: error validating "/home/ec2-user/cluster-autoscaler.yaml": error validating data: failed to download openapi: Get "http://localhost:8080/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
```
If I remote into the EC2 instance and run the export of the AWS keys + aws eks + kubectl commands by hand, everything works fine.
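That is, roughly this sequence over SSH (keys redacted):

```
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=XXX
export AWS_SESSION_TOKEN=XXX
aws eks --region us-east-2 update-kubeconfig --name test-eks-cluster
kubectl apply -f /home/ec2-user/cluster-autoscaler.yaml
```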
Neither approach works via CommandRunner, even with both the config and credentials files in place. Has anyone run into the same issue?