chespix closed this issue 7 years ago
Can you show me your metadataproxy configuration, with anything sensitive snipped out?
If you set DEBUG=True in metadataproxy's environment, it'll show you better debug output.
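For reference, in a docker-compose setup that usually means adding it to the service's environment block; a minimal sketch (the service name here is illustrative, not necessarily what your compose file uses):

```yaml
# Fragment of the metadataproxy docker-compose.yml
metadataproxy:
  environment:
    - DEBUG=True
```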
One thing to note is that some of your calls are incorrect:
this:
curl -vvvv http://169.254.169.254/iam/security-credentials/read-s3-db-backups
should be:
curl -vvvv http://169.254.169.254/latest/meta-data/iam/security-credentials/read-s3-db-backups
I found the issue and I'm working on a fix.

Issue: our environment runs under Rancher 1.3, and there have been changes to Rancher since version 1.2 that affect the networking container metadata that metadataproxy depends on: http://docs.rancher.com/rancher/v1.3/en/rancher-services/networking/
DIFFERENCES FROM PREVIOUS RELEASES
When using Rancher's IPsec networking prior to the 1.2 release, a container in the managed network would be assigned both a Docker bridge IP (172.17.0.0/16) and a Rancher managed IP (10.42.0.0/16) on the default docker0 bridge. With the adoption of the CNI framework, any container launched in the managed network will only have the Rancher managed IP (default subnet: 10.42.0.0/16).

IMPLICATIONS OF USING CNI
The Rancher managed IP address will not be present in Docker metadata, which means it will not appear in docker inspect. Certain images may not work if they require a Docker bridge IP. Any ports published on a host will not be shown in docker ps, as Rancher manages separate iptables for the networking.
The solution I'm working on uses the additional container metadata under Config.Labels.io.rancher.container.ip when it can't match the IP address in the normal metadata.
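A rough sketch of that fallback against the docker inspect payload (this is illustrative, not the actual metadataproxy patch; the function name and exact matching logic are mine):

```python
def container_ip(inspect_data):
    """Return a container's IP from docker inspect data.

    Under Rancher 1.3+ CNI networking, NetworkSettings.IPAddress is empty,
    but Rancher records the managed IP in the io.rancher.container.ip label.
    """
    # Normal case: Docker-assigned bridge IP is present in the metadata.
    ip = inspect_data.get("NetworkSettings", {}).get("IPAddress")
    if ip:
        return ip
    # Fallback: Rancher stores the managed IP as a container label,
    # typically in CIDR form, e.g. "10.42.0.5/16".
    label = (inspect_data.get("Config", {})
                         .get("Labels", {})
                         .get("io.rancher.container.ip"))
    if label:
        return label.split("/")[0]
    return None
```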
Setting DEBUG=True has no effect; absolutely nothing shows up in the logs. I read that gunicorn/Flask needs to be configured to support logging to stdout.
Doh. Do you mind also sending in a PR for the gunicorn changes necessary? The default should be reasonable logging.
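For what it's worth, gunicorn's standard flags can point both log streams at stdout/stderr and raise verbosity; something along these lines (the app module path is a guess, substitute the project's actual WSGI entry point):

```shell
# "-" means log to stdout/stderr; debug is gunicorn's most verbose level.
gunicorn --access-logfile - --error-logfile - --log-level debug \
    metadataproxy:app
```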
I need to check on this, will probably take some time
Debug logging issues fixed in 1.2.0, via PR https://github.com/lyft/metadataproxy/pull/38
Hi. We have metadataproxy running as a Rancher stack. We have set up the firewall rules and can see that our requests to 169.254.169.254 are being sent to the metadataproxy container, but only the pass-through proxy seems to work. Any time we try to get info from the IAM endpoint, we either get no output at all or get a 404.
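(For anyone following along, the usual redirect rule looks something like the following, per the common metadataproxy setup; the interface, host IP, and port are placeholders you'd adjust for your Rancher networking:)

```shell
# Redirect metadata-service traffic from containers to metadataproxy.
iptables -t nat -I PREROUTING -p tcp -i docker0 -d 169.254.169.254 \
    --dport 80 -j DNAT --to-destination <host-ip>:8000
```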
Is there any way to enable debug output in metadataproxy to try to find out what's going on?
More detailed curl output, where we see metadataproxy is handling the request:
Metadataproxy docker-compose.yml:
Application docker-compose.yml:
Thanks for your help!