jtblin / kube2iam

kube2iam provides different AWS IAM roles for pods running on Kubernetes
BSD 3-Clause "New" or "Revised" License

Openshift 4.5 #276

Open jkassis opened 4 years ago

jkassis commented 4 years ago

I'm running kube2iam on OpenShift 4.5 and launching my pods with the http_proxy environment variable set as described in the README.md. The remote address of the request is coming up wrong, and that address is what kube2iam uses to look up the role mapping. It looks like OpenShift 4.5 is setting it to something other than the pod IP.
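For context, here is a minimal sketch of the kind of pod spec being described, assuming the iam.amazonaws.com/role annotation documented in the kube2iam README and an http_proxy pointed at the kube2iam agent on the node's port 8181 (the port from the "Listening on port 8181" log line below). The image and the exact proxy wiring are placeholders and guesses, not the reporter's actual manifest:

apiVersion: v1
kind: Pod
metadata:
  name: eddie
  namespace: fg
  annotations:
    # Role that kube2iam should assume for requests coming from this pod's IP
    iam.amazonaws.com/role: k8s-eddie
spec:
  containers:
    - name: eddie
      image: example.org/eddie:latest    # placeholder image
      env:
        # Node IP, so the proxy points at the kube2iam DaemonSet pod on this node
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        # Route AWS SDK metadata calls through kube2iam (assumed wiring; port 8181 per the log)
        - name: http_proxy
          value: http://$(HOST_IP):8181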


The pod IP and role are registered...

time="2020-08-08T19:13:39Z" level=debug msg="Pod OnAdd" pod.iam.role=k8s-eddie pod.name=eddie-649667ffc6-tvsgq pod.namespace=fg pod.status.ip=10.129.2.5 pod.status.phase=Running

The server is confused during the request. Note that req.remote in the log below is 169.254.33.2, not the registered pod IP 10.129.2.5, so the role lookup fails...

time="2020-08-08T19:13:39Z" level=info msg="Listening on port 8181"
time="2020-08-08T19:13:40Z" level=debug msg="Pod OnUpdate" pod.iam.role= pod.name=kube2iam-nx2fj pod.namespace=kube2iam pod.status.ip=10.0.130.59 pod.status.phase=Running
time="2020-08-08T19:16:06Z" level=debug msg="Proxy ec2 metadata request" metadata.url=169.254.169.254 req.method=PUT req.path=/latest/api/token req.remote=169.254.33.2
time="2020-08-08T19:16:06Z" level=info msg="PUT /latest/api/token (403) took 1.191977 ms" req.method=PUT req.path=/latest/api/token req.remote=169.254.33.2 res.duration=1.191977 res.status=403
time="2020-08-08T19:16:06Z" level=debug msg="Proxy ec2 metadata request" metadata.url=169.254.169.254 req.method=PUT req.path=/latest/api/token req.remote=169.254.33.2
time="2020-08-08T19:16:06Z" level=info msg="PUT /latest/api/token (403) took 21.430212 ms" req.method=PUT req.path=/latest/api/token req.remote=169.254.33.2 res.duration=21.430212 res.status=403
time="2020-08-08T19:16:06Z" level=debug msg="Proxy ec2 metadata request" metadata.url=169.254.169.254 req.method=PUT req.path=/latest/api/token req.remote=169.254.33.2
time="2020-08-08T19:16:06Z" level=info msg="PUT /latest/api/token (403) took 0.893003 ms" req.method=PUT req.path=/latest/api/token req.remote=169.254.33.2 res.duration=0.893003 res.status=403
time="2020-08-08T19:16:06Z" level=debug msg="Proxy ec2 metadata request" metadata.url=169.254.169.254 req.method=PUT req.path=/latest/api/token req.remote=169.254.33.2
time="2020-08-08T19:16:06Z" level=info msg="PUT /latest/api/token (403) took 4.869249 ms" req.method=PUT req.path=/latest/api/token req.remote=169.254.33.2 res.duration=4.869249 res.status=403
time="2020-08-08T19:16:08Z" level=info msg="GET /latest/meta-data/iam/security-credentials/ (404) took 2211.651875 ms" req.method=GET req.path=/latest/meta-data/iam/security-credentials/ req.remote=169.254.33.2 res.duration=2211.651875 res.status=404
time="2020-08-08T19:16:11Z" level=info msg="GET /latest/meta-data/iam/security-credentials/ (404) took 2289.899705 ms" req.method=GET req.path=/latest/meta-data/iam/security-credentials/ req.remote=169.254.33.2 res.duration=2289.899705 res.status=404
time="2020-08-08T19:16:13Z" level=info msg="GET /latest/meta-data/iam/security-credentials/ (404) took 2476.947594 ms" req.method=GET req.path=/latest/meta-data/iam/security-credentials/ req.remote=169.254.33.2 res.duration=2476.9475939999998 res.status=404
time="2020-08-08T19:16:16Z" level=info msg="GET /latest/meta-data/iam/security-credentials/ (404) took 2809.587032 ms" req.method=GET req.path=/latest/meta-data/iam/security-credentials/ req.remote=169.254.33.2 res.duration=2809.587032 res.status=404
time="2020-08-08T19:20:01Z" level=info msg="GET /debug/store (200) took 0.356185 ms" req.method=GET req.path=/debug/store req.remote= res.duration=0.35618500000000003 res.status=200

The client reports this error...

Could not get object from s3: NoCredentialProviders: no valid providers in chain
caused by: EnvAccessKeyNotFound: failed to find credentials in the environment.
SharedCredsLoad: failed to load profile, .
EC2RoleRequestError: no EC2 instance role found
caused by: EC2MetadataError: failed to make EC2Metadata request
    status code: 404, request id: 
caused by: pod with specificed IP not found
jkassis commented 4 years ago

Reinstalling the cluster with Calico as the network layer fixed this.