cnoe-io / idpbuilder

Spin up a complete internal developer platform with only Docker required as a dependency.
https://cloud-native.slack.com/archives/C05TN9WFN5S
Apache License 2.0

Feature: Support cloud credentials transparently locally using kind #245

Open csantanapr opened 3 months ago

csantanapr commented 3 months ago

Have you searched for this feature request?

Problem Statement

When using idpbuilder with kind on my local machine, or during testing in GitHub Actions, I would like my IDP packages to be configured the same way as if they were running in the cloud.

For example, on AWS when running on EKS, pods can access AWS credentials transparently by using the EC2 instance profile IAM role via the metadata API, or by using IRSA or the new Pod Identity. This lets pods avoid depending on hardcoded credentials (an access key stored in a k8s Secret). Some examples of IDP packages that leverage AWS access are Crossplane, ACK, Argo CD, Backstage (AWS plugins), user apps (e.g. access to S3 or an RDS database), the Terraform controller, and Argo Events.

It would be ideal if the YAML files used in idpbuilder, for example the Crossplane ProviderConfig, were identical when running on kind with idpbuilder and when running Crossplane on EKS, with no need for the user to create a YAML with hardcoded credentials and maintain two different ProviderConfigs.
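To illustrate the divergence (a sketch assuming the Upbound AWS provider family; names are illustrative), today you end up maintaining two different ProviderConfigs:

```yaml
# On EKS: credentials come from IRSA, no secret needed
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: IRSA
---
# Locally with kind today: a diverging ProviderConfig with hardcoded credentials
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: aws-creds
      key: creds
```

Ideally only the first manifest would exist and it would work in both places.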

Searching online, I found this article that mentions an AWS metadata service that can be run locally, e.g. with kind: https://medium.com/@slimm609/aws-instance-profile-for-local-development-f144b0a7b8b9. There is also a mock service on GitHub: https://github.com/aws/amazon-ec2-metadata-mock
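For reference, a rough sketch of what running that mock inside the cluster could look like (the image reference and tag are assumptions; check the project's README for the published image and its Helm chart):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ec2-metadata-mock
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ec2-metadata-mock
  template:
    metadata:
      labels:
        app: ec2-metadata-mock
    spec:
      containers:
      - name: mock
        # Image path/tag are assumptions; the project also ships a Helm chart
        image: public.ecr.aws/aws-ec2/amazon-ec2-metadata-mock:v1.11.2
        ports:
        - containerPort: 1338  # the mock's default listen port
```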

Possible Solution

What I would like is an experience where idpbuilder, when creating the kind cluster, mounts my $HOME/.aws directory into the kind node and then runs the metadata service inside kind as a pod that can assume a role or just use user credentials, with the token hopefully being refreshed.
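On the kind side, the mount itself is standard kind configuration; a minimal sketch (paths are illustrative, and whether idpbuilder would expose this as a flag or a config option is an open question):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  # Make the host's AWS config/credentials visible inside the kind node
  - hostPath: /home/<user>/.aws   # $HOME/.aws on the host machine
    containerPath: /root/.aws
    readOnly: true
```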

Alternatives Considered

No response

greghaynes commented 3 months ago

I would love this @csantanapr! Internally, we directly change the configs of our various controllers to reference secrets for CI/local rather than relying on the baked-in IRSA config in our deployments. I would much prefer to have this all follow the same code path with something like this!

nimakaviani commented 3 months ago

cool idea! +1 from me.

ealtili commented 3 months ago

Hi,

I think integrating Crossplane with GitHub and then declaring the desired state in a repo avoids copying multiple YAML files around. That repo can then be used to configure the Crossplane AWS IAM provider, which allows provisioning OIDC providers, SAML providers, roles, and policies.

On the first run, the repo can be initialized by Crossplane, with the initial secrets stored in GitHub secrets. Then Argo can monitor the repo, read the secret, and put it into Vault (or ESO and sealed secrets) to configure AWS or any other cloud provider. Once this is done, we can use either patch-and-transform or an Argo Workflow to make sure the initial secrets that were used are deleted.
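For example (a hedged sketch using External Secrets Operator; the store name and Vault path are hypothetical), the synced secret could look like:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: aws-creds
  namespace: crossplane-system
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend          # hypothetical ClusterSecretStore backed by Vault
    kind: ClusterSecretStore
  target:
    name: aws-creds              # Kubernetes Secret created by ESO for the provider to consume
  data:
  - secretKey: creds
    remoteRef:
      key: secret/data/aws       # hypothetical path in Vault
      property: credentials
```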

As you know, the AWS provider for Crossplane supports IRSA, and we can get access to AWS resources via federation or IAM Roles Anywhere.

Ideally we should have Keycloak or something similar (https://github.com/Azure/azure-workload-identity) running so that it acts as a local identity provider for resources, including pods, and can do workload identity federation with any cloud provider.

Another approach is to provide IAM credentials to containers running inside a Kubernetes cluster based on annotations. kube2iam provides different AWS IAM roles for pods running on Kubernetes: https://github.com/jtblin/kube2iam
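For illustration, kube2iam is driven by a pod annotation; a minimal sketch (the role name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aws-cli-test
  annotations:
    # kube2iam intercepts calls to the metadata API and assumes this role for the pod
    iam.amazonaws.com/role: my-app-role   # placeholder role name/ARN
spec:
  restartPolicy: Never
  containers:
  - name: aws-cli
    image: amazon/aws-cli
    command: ["aws", "sts", "get-caller-identity"]
```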

Here is a related project that mocks the metadata API: https://github.com/jtblin/aws-mock-metadata