lyft / metadataproxy

A proxy for AWS's metadata service that gives out scoped IAM credentials from STS

metadataproxy not returning IAM role credentials to containers #33

Closed chespix closed 7 years ago

chespix commented 7 years ago

Hi. We have metadataproxy running as a Rancher stack. We have set up the firewall rules and we can see that our requests to 169.254.169.254 are being sent to the metadataproxy container, but only the pass-through proxying seems to work. Any time we try to get info from the IAM endpoint, we either get no output at all or we get a 404.

Is there any way to enable debug output in metadataproxy to try and find out what's going on?

root@ba95a0341b81:/aws# curl http://169.254.169.254/latest/meta-data/mac
0a:b9:20:62:36:3c

root@ba95a0341b81:/aws# curl http://169.254.169.254/latest/meta-data/iam
info

root@ba95a0341b81:/aws# curl http://169.254.169.254/latest/meta-data/iam/info
#(No output)

root@ba95a0341b81:/aws# curl http://169.254.169.254/iam/security-credentials/rancher-dev_rancher_machine
<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
         "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
 <head>
  <title>404 - Not Found</title>
 </head>
 <body>
  <h1>404 - Not Found</h1>
 </body>
</html>

More detailed curl output, where we can see that metadataproxy is handling the request:

root@ba95a0341b81:/aws# curl -vvvv http://169.254.169.254/iam/security-credentials/read-s3-db-backups
* Hostname was NOT found in DNS cache
*   Trying 169.254.169.254...
* Connected to 169.254.169.254 (169.254.169.254) port 80 (#0)
> GET /iam/security-credentials/read-s3-db-backups HTTP/1.1
> User-Agent: curl/7.38.0
> Host: 169.254.169.254
> Accept: */*
>
< HTTP/1.1 200 OK
* Server gunicorn/19.3.0 is not blacklisted
< Server: gunicorn/19.3.0
< Date: Mon, 06 Feb 2017 17:43:30 GMT
< Connection: keep-alive
< Transfer-Encoding: chunked
< Content-Type: text/html
<
<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
         "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
 <head>
  <title>404 - Not Found</title>
 </head>
 <body>
  <h1>404 - Not Found</h1>
 </body>
</html>
* Connection #0 to host 169.254.169.254 left intact

Metadataproxy docker-compose.yml :

version: '2'
services:
  metadataproxy:
    image: pythiant9shared/metadataproxy:latest
    stdin_open: true
    network_mode: host
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    tty: true
    labels:
      io.rancher.container.pull_image: always
      io.rancher.scheduler.global: 'true'

Application docker-compose.yml :

version: '2'
services:
  test-db-tasks:
    image: pythiant9shared/rds-db-tasks:latest
    environment:
      IAM_ROLE: read-s3-db-backups
    stdin_open: true
    labels:
      io.rancher.container.pull_image: always
      io.rancher.container.start_once: 'true'

thanks for your help!

ryan-lane commented 7 years ago

Can you show me your metadataproxy configuration, with anything sensitive snipped out?

ryan-lane commented 7 years ago

If you set DEBUG=True in the environment for metadataproxy, it'll show you better debug output.
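
In your compose file that would look something like this (untested sketch, based on the metadataproxy service definition you posted above):

version: '2'
services:
  metadataproxy:
    image: pythiant9shared/metadataproxy:latest
    environment:
      DEBUG: 'True'   # enables metadataproxy's debug output
    network_mode: host
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock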

ryan-lane commented 7 years ago

One thing to note is that some of your calls are incorrect:

this:

curl -vvvv http://169.254.169.254/iam/security-credentials/read-s3-db-backups

should be:

curl -vvvv http://169.254.169.254/latest/meta-data/iam/security-credentials/read-s3-db-backups
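
It's also worth listing the security-credentials index first, to see which role the proxy resolves for your container (the path layout mirrors the real metadata service):

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

If the proxy matched your container, that should print the role name (read-s3-db-backups in your case), and the credentials live one path segment below it.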

garceri commented 7 years ago

I found the issue and I'm working on a fix. Our environment runs under Rancher 1.3, and there have been changes to Rancher since version 1.2 that altered the networking container metadata that metadataproxy depends on: http://docs.rancher.com/rancher/v1.3/en/rancher-services/networking/

DIFFERENCES FROM PREVIOUS RELEASES When using Rancher’s IPsec networking prior to the 1.2 release, a container in the managed network would be assigned with both a Docker bridge IP (172.17.0.0/16) and a Rancher managed IP (10.42.0.0/16) on the default docker0 bridge. With the adoption of the CNI framework, any container launched in managed network will only have the Rancher managed IP (default subnet: 10.42.0.0/16).

IMPLICATIONS OF USING CNI The Rancher managed IP address will not be present in Docker metadata, which means it will not appear in docker inspect. Certain images may not work if it requires a Docker bridge IP. Any ports published on a host will not be shown in docker ps as Rancher manages separate IPtables for the networking.
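
(Note that the label itself is still visible via docker inspect, e.g. for the test-db-tasks container from the compose file above; the actual container name may differ under Rancher:

docker inspect -f '{{ index .Config.Labels "io.rancher.container.ip" }}' test-db-tasks

which prints the Rancher-managed IP in CIDR form, e.g. 10.42.180.7/16.)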

The solution I'm working on uses the additional container metadata under Config.Labels.io.rancher.container.ip when it can't match the IP address in the normal Docker metadata, roughly like the sketch below.
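
A sketch of the approach, not the final patch; the helper name is illustrative and client is assumed to be the low-level docker-py client (docker.APIClient in current releases):

# Sketch: fall back to the Rancher-managed IP label when the caller's
# IP doesn't match any container's Docker bridge IP (Rancher >= 1.2 / CNI).
import docker

RANCHER_IP_LABEL = 'io.rancher.container.ip'

def find_container_by_ip(client, request_ip):
    for summary in client.containers():
        data = client.inspect_container(summary['Id'])
        # Pre-CNI case: the bridge IP is present in docker inspect.
        if data['NetworkSettings'].get('IPAddress') == request_ip:
            return data
        # CNI case: the managed IP only exists as a label, usually in
        # CIDR form, e.g. '10.42.180.7/16', so strip the prefix length.
        labels = data['Config'].get('Labels') or {}
        if labels.get(RANCHER_IP_LABEL, '').split('/')[0] == request_ip:
            return data
    return None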

garceri commented 7 years ago

Setting DEBUG=True has no effect; absolutely nothing shows up in the logs. I've read that gunicorn/Flask needs to be configured to support logging to stdout.

ryan-lane commented 7 years ago

Doh. Do you mind also sending in a PR for the necessary gunicorn changes? The default should be reasonable logging.
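
For reference, gunicorn 19.x can already log to stdout/stderr via its standard flags, so the change may be as small as adjusting the launch command (module path assumed here; adjust to however the image actually starts the app):

gunicorn --log-level debug --access-logfile - --error-logfile - wsgi:app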

garceri commented 7 years ago

I need to check on this; it will probably take some time.


ryan-lane commented 7 years ago

Logging issues fixed for debug in 1.2.0, via PR https://github.com/lyft/metadataproxy/pull/38