kubernetes / kops

Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
https://kops.sigs.k8s.io/
Apache License 2.0

Unable to access AWS VPCs using assume role #6535

Closed: haroonniazi closed this issue 5 years ago

haroonniazi commented 5 years ago

Kops Version: 1.11.0 (git-2c2042465)

kubectl Version:

```
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:39:52Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:30:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
```

Cloud Provider: AWS

I am trying to execute `kops update cluster --yes` in a Docker container running on ECS in AWS account 'A', with a task role policy that allows full access to EC2, Route53, IAM and Auto Scaling. The task role also has permission to assume a role in another AWS account 'B', where the k8s cluster is running.
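For reference, a task role policy of the shape described above might look like the following sketch. The assume-role ARN reuses the redacted placeholder from the profile config below; the actual policy document is not shown in this issue:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:*", "route53:*", "iam:*", "autoscaling:*"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::xxxxxxxxxxxx:role/some-role"
        }
    ]
}
```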

The AWS profile on my container is configured as follows:

```ini
[profile someprofile]
role_arn = arn:aws:iam::xxxxxxxxxxxx:role/some-role
credential_source = EcsContainer
output = json
region = eu-west-1
```

The environment variables are set as follows:

```sh
AWS_DEFAULT_PROFILE="someprofile"
AWS_PROFILE="someprofile"
AWS_SDK_LOAD_CONFIG="1"
```
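With `AWS_SDK_LOAD_CONFIG=1` set, the identity that the AWS CLI resolves for this profile can be checked before running kops, for example:

```sh
# Expected to print the assumed-role identity in account 'B', e.g.
# "Arn": "arn:aws:sts::xxxxxxxxxxxx:assumed-role/some-role/<session>"
aws sts get-caller-identity --profile someprofile
```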

The role which I am trying to assume in account 'B' also has a trust relationship with the ECS task role in account 'A'.
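A minimal sketch of such a trust policy on the account 'B' role (the account ID `AAAAAAAAAAAA` and role name `ecs-task-role` are placeholders, not taken from this issue):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AAAAAAAAAAAA:role/ecs-task-role"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```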

When I execute `aws ec2 describe-vpcs` from my Docker container running on an ECS cluster in account 'A', it returns the desired output and finds the correct VPC in account 'B':

{ "Vpcs": [ { "VpcId": "vpc-0eb974d11027d936b", "InstanceTenancy": "default", "Tags": [ { "Value": "someaccount", "Key": "Name" }, { "Value": "shared", "Key": "kubernetes.io/cluster/k8s.somecluster.somedomain.com" } ], "CidrBlockAssociationSet": [ { "AssociationId": "vpc-cidr-assoc-0cc604f1d8feacf63", "CidrBlock": "10.150.32.0/19", "CidrBlockState": { "State": "associated" } } ], "Ipv6CidrBlockAssociationSet": [ { "Ipv6CidrBlock": "2a05:d018:e8:6000::/56", "AssociationId": "vpc-cidr-assoc-01742765b3333f172", "Ipv6CidrBlockState": { "State": "associated" } } ], "State": "available", "DhcpOptionsId": "dopt-029b7349d660873fe", "OwnerId": "706283438095", "CidrBlock": "10.150.32.0/19", "IsDefault": false } ] }

but when I execute `kops update cluster --yes`, it fails for the same VPC ID that the describe-vpcs call above found:

```
error listing VPCs: InvalidVpcID.NotFound: The vpc ID 'vpc-0eb974d11027d936b' does not exist.
```
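One possible explanation (not confirmed in this thread) is that the aws-sdk-go version vendored by kops 1.11 does not resolve `credential_source=EcsContainer` the way the AWS CLI does, so kops ends up calling EC2 with the account 'A' credentials. A workaround sketch that sidesteps profile resolution entirely is to assume the role manually and export static credentials before invoking kops:

```sh
# Assume the account 'B' role directly and export temporary credentials,
# so kops never has to interpret the profile's credential_source setting.
CREDS=$(aws sts assume-role \
    --role-arn arn:aws:iam::xxxxxxxxxxxx:role/some-role \
    --role-session-name kops-update \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)

export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)
unset AWS_PROFILE AWS_DEFAULT_PROFILE   # avoid re-triggering profile lookup

kops update cluster --yes
```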

The kops manifest is available in this gist: https://gist.github.com/haroonniazi/85c9d48a6cec569396897d7dfebd731a

Any help would be highly appreciated.

niroliyanage commented 5 years ago

Hey @haroonniazi, did you find a solution for this?

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with `/close`.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with `/remove-lifecycle rotten`. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with `/close`.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot commented 5 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with `/reopen`. Mark the issue as fresh with `/remove-lifecycle rotten`.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close

k8s-ci-robot commented 5 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/kops/issues/6535#issuecomment-529065924):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.