LykinsN opened this issue 4 years ago
Hi @LykinsN, another option to specifying IP addresses in the `DD_PROXY_NO_PROXY` or `NO_PROXY` env vars would be to set `skip_proxy: true` in the agent check's configuration, as in this example: https://github.com/DataDog/integrations-core/blob/5d992e4cabad3f5141ebe31b8c778a0aaf459e79/kyototycoon/datadog_checks/kyototycoon/data/conf.yaml.example#L28.
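For reference, a minimal sketch of what that might look like in a check's conf.yaml (the instance URL here is illustrative, not taken from this thread):

```yaml
instances:
    # skip_proxy tells the Agent to ignore any configured proxy
    # settings when contacting this instance
  - url: http://localhost:1978   # hypothetical check endpoint
    skip_proxy: true
```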
If you are still running into issues configuring the proxy settings for those checks, please open a ticket with the support team: support@datadoghq.com.
This still seems to be happening. I was blocked on this too, and here are two things that might unblock you:

- The `DD_PROXY_NO_PROXY` variable can be replaced with `DD_NO_PROXY`. This was suggested by support; however, I have not tried it.
- There is a `skip_proxy` flag that can be specified in the YAML config file to ignore/skip the configured proxy settings. I ended up using this for almost all my integrations, and it has worked for me.
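As a sketch only (untested here, and all values are hypothetical), setting that env var in the agent DaemonSet manifest might look like:

```yaml
# Excerpt from a hypothetical Datadog agent DaemonSet spec
containers:
  - name: agent
    image: datadog/agent:7.20.2
    env:
      - name: DD_PROXY_HTTP
        value: "http://proxy.example.internal:3128"   # hypothetical proxy address
      - name: DD_NO_PROXY   # suggested by support as an alternative to DD_PROXY_NO_PROXY
        value: "10.0.0.0/8 169.254.169.254"
```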
Output of the info page (if this is a bug)
Describe what happened:
I've been working to set up the full suite of Datadog Kubernetes containers to enable trace and process monitoring within our cluster. I've been able to get the containers implemented and running, but when checking the agent logs, I'm seeing the below traces repeating every few seconds:
Our cloud infrastructure runs behind a proxy, and the "Target service not allowed" message appears to come from Sophos, our proxy solution. It seems as though the Python execution is not inheriting the proxy settings defined for the container. To be clear, I've added every combination of proxy configuration details I can think of to each Kubernetes manifest, without any success:
If I add each individual IP address to the no_proxy fields within the container, I can successfully curl the endpoints, but even then the above checks do not succeed. It seems the Python invocations within Datadog are not inheriting the values from DD_PROXY_NO_PROXY, and there appears to be a conflict in inheriting subnet CIDR ranges in particular. I've manually added each IP address from our cloud subnets to the no_proxy fields, again without success. In any case, this approach isn't practical for allowing the Kubernetes cluster CIDR, since the subnet range spans an extremely large number of addresses.
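To illustrate the workaround described above (all values hypothetical), listing individual IPs in the manifest looks roughly like this, which clearly does not scale to a cluster CIDR:

```yaml
env:
  - name: DD_PROXY_NO_PROXY
    # Each endpoint IP has to be listed individually; a CIDR range such
    # as 10.0.0.0/16 did not appear to be honored by the Python checks
    value: "10.0.12.34 10.0.12.35 10.0.12.36"
```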
I don't see any other network- or proxy-related errors in the logs, and all other connectivity native to the Go runtime seems to be working. Only the Python-related execution appears to be affected by this issue.
Describe what you expected:
When running the containers with the above proxy configuration, I'd expect all network checks both within the Kubernetes cluster CIDR and within the allowed cloud subnet to communicate successfully.
Steps to reproduce the issue:
Deploy the Datadog agent containers to a cluster running within a proxied environment, and confirm that the no_proxy settings being declared in the manifests are not sufficient for enabling traffic to flow.
Additional environment details (Operating System, Cloud provider, etc):
Running within AWS, in a proxy-controlled network space. The above behavior was encountered on datadog/agent:7.20.2.