prometheus / jmx_exporter

A process for exposing JMX Beans via HTTP for Prometheus consumption
Apache License 2.0

JMX exporter does not expose container port externally via the Agent method #889

Closed · agitkid closed this 1 year ago

agitkid commented 1 year ago

Hello,

I have successfully exposed JMX metrics on my java app, but only internally on the container, in a kubernetes environment. I am trying to read the metrics from a Prometheus pod, but since the port is not exposed externally, Prometheus can't read the metrics. Isn't the agent meant to expose the container port externally?

If not, how do I expose metrics externally using the Agent?

I tried all of these, and none worked:

- `-javaagent:/usr/local/tomcat/jmx_prometheus_javaagent-0.19.0.jar=9119:/usr/local/tomcat/jmx-exporter-config.yaml`
- `-javaagent:/usr/local/tomcat/jmx_prometheus_javaagent-0.19.0.jar=9119:9119:/usr/local/tomcat/jmx-exporter-config.yaml`
- `-javaagent:/usr/local/tomcat/jmx_prometheus_javaagent-0.19.0.jar=0.0.0.0:9119:/usr/local/tomcat/jmx-exporter-config.yaml`

Relevant parts of my `jmx-exporter-config.yaml` are:

```yaml
lowercaseOutputLabelNames: true
lowercaseOutputName: true
whitelistObjectNames: ["java.lang:type=OperatingSystem", "Catalina:*"]
blacklistObjectNames: []
```

I don't have anything in the "jmx-exporter-config.yaml" file that mentions ports, since I am using the agent.

I also tried adding `EXPOSE 9119` to our Dockerfile, but no luck.

I can confirm that Prometheus can read the Kubecost app's externally exposed metrics, but not the metrics exposed by JMX_exporter:

Kubecost app metrics exposed externally for Prometheus to scrape:

```
$ nc -v 172.20.182.217 9003
172.20.182.217 (172.20.182.217:9003) open
```

My app on port 80 (external), for normal app traffic:

```
$ nc -v 172.20.116.214 80
172.20.116.214 (172.20.116.214:80) open
```

My app on the JMX_exporter port (external):

```
$ nc -v 172.20.116.214 9119
```

...this just hangs.

Any ideas? Any help is much appreciated. Thanks!

dhoard commented 1 year ago
```
-javaagent:/usr/local/tomcat/jmx_prometheus_javaagent-0.19.0.jar=9119:/usr/local/tomcat/jmx-exporter-config.yaml
```

... is the correct configuration. This will tell the JMX Exporter to expose the metrics within the K8s network on port 9119. You can verify that by getting a shell into the running container and testing via nc, curl, wget, etc.

External access (outside of K8s) into the K8s POD requires an Ingress resource.

https://kubernetes.io/docs/concepts/services-networking/ingress/
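
For illustration, a minimal Ingress sketch (the resource name, hostname, and backing Service name here are placeholders, not from this issue):

```yaml
# Sketch only: routes external HTTP traffic for metrics.example.com
# to a Service that fronts the JMX exporter port inside the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jmx-metrics              # placeholder name
spec:
  rules:
    - host: metrics.example.com  # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app     # placeholder Service fronting the pod
                port:
                  number: 9119   # the JMX exporter port
```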

agitkid commented 1 year ago

@dhoard, thank you for your response!

When I used the word "external" above, I did not mean outside of the K8s cluster. I meant outside of the container, as opposed to getting a shell on the app container and running:

```
lynx localhost:9119
```

That works fine inside the container, but I can't get the metrics from a Prometheus pod. For example, if I get a shell on a Prometheus pod container, I can't reach the metrics when I run this:

```
$ nc -v 172.20.116.214 9119
```

(this just hangs, because it can't connect to port 9119)

On that same Prometheus pod container, I can hit the Kubecost metrics just fine by running netcat against its port:

```
$ nc -v 172.20.182.217 9003
172.20.182.217 (172.20.182.217:9003) open
```

That is what I was trying to illustrate above. This tells me that JMX_exporter is only opening port 9119 internally in the container, not externally. You can see the results of running the following:

```
$ netstat -tunap | grep 9119
tcp6       0      0 :::9119      :::*      LISTEN      1/java
```

So the `:::` entries represent IPv6 wildcard addresses, and this might explain why it's not listening externally, but I'm not sure about that.

Thanks for any insight here.

agitkid commented 1 year ago

We were able to fix this by following this article and adjusting a few things in the Helm chart for our K8s application: https://medium.com/logistimo-engineering-blog/tomcat-jvm-metrics-monitoring-using-prometheus-in-kubernetes-c313075af727

Added to our `service.yaml` file, the prometheus section:

```yaml
spec:
  type: {{ .Values.service.type }}
  ports:
```
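
A `ports:` entry for the exporter, along the lines of the linked article (the `prometheus` port name and exact fields here are an assumed sketch, not the verbatim chart), looks like:

```yaml
ports:
  - name: prometheus    # assumed port name, per the article's convention
    port: 9119          # Service port
    targetPort: 9119    # container port opened by the javaagent
    protocol: TCP
```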

Added to our `values.yaml` file:

```yaml
podAnnotations: { prometheus.io/scrape: 'true', prometheus.io/port: '9119' }
```
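
For these annotations to land on the pods, the chart's deployment template has to render `podAnnotations` into the pod metadata; a typical Helm snippet for that (assumed, not quoted from our chart) is:

```yaml
# deployment.yaml (sketch): copies podAnnotations from values.yaml
# onto every pod created by the Deployment
template:
  metadata:
    annotations:
      {{- toYaml .Values.podAnnotations | nindent 8 }}
```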

Added to our environments file:

```yaml
- name: JAVA_OPTS
  value: '-javaagent:/usr/local/tomcat/jmx_prometheus_javaagent-0.19.0.jar=9119:/usr/local/tomcat/jmx-exporter-config.yaml'
```

And now this works from our Prometheus pod:

```
$ nc -v 172.20.116.214 9119
172.20.116.214 (172.20.116.214:9119) open
```
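
For anyone landing here: the `prometheus.io/*` annotations only take effect if the Prometheus server runs a pod-discovery scrape job that honors them. A sketch of the conventional `kubernetes-pods` job (treat the exact relabeling as an assumption; the Prometheus Helm chart ships a fuller version):

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # point the scrape address at the port from prometheus.io/port
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```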