jaegertracing / helm-charts

Helm Charts for Jaeger backend
Apache License 2.0

[jaeger] cannot connect to external Kafka with authentication #129

Open · opened by Arnuphap-Yupuech 4 years ago

Arnuphap-Yupuech commented 4 years ago

Background: I am trying to integrate a Jaeger deployment (installed via Helm on AWS EKS) with our existing Confluent Kafka cluster running on AWS EC2, so that Kafka serves as the storage backend for the jaeger-collector service.

Currently we are having an issue with authentication between Jaeger and Kafka. I configured the Helm chart values file to include the credentials, but those credentials are not picked up when connecting to Kafka. Here is my configuration.

storage:
  kafka:
    brokers:
      - kafka-broker1.com:9093
      - kafka-broker2.com:9093
      - kafka-broker3.com:9093
    topic: jaeger_v1_test
    authentication: PlainText
    extraEnv:
      - name: UserName
        value: redacted
      - name: Password
        value: redacted

Problem: This configuration is passed through to the container's environment variables, but the collector still cannot connect, failing with this error:

2020/06/24 10:11:25 maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
{"level":"info","ts":1592993485.7167063,"caller":"flags/service.go:116","msg":"Mounting metrics handler on admin server","route":"/metrics"}
{"level":"info","ts":1592993485.718103,"caller":"flags/admin.go:120","msg":"Mounting health check on admin server","route":"/"}
{"level":"info","ts":1592993485.718164,"caller":"flags/admin.go:126","msg":"Starting admin HTTP server","http-addr":":14269"}
{"level":"info","ts":1592993485.7181845,"caller":"flags/admin.go:112","msg":"Admin server started","http.host-port":"[::]:14269","health-status":"unavailable"}
{"level":"info","ts":1592993485.7203314,"caller":"kafka/factory.go:68","msg":"Kafka factory","producer builder":{"Brokers":["kafka-broker1.com:9093","kafka-broker2.com:9093","kafka-broker3.com:9093"],"RequiredAcks":1,"Compression":0,"CompressionLevel":0,"ProtocolVersion":"","BatchLinger":0,"BatchSize":0,"BatchMaxMessages":0,"Authentication":"PlainText","Kerberos":{"ServiceName":"kafka","Realm":"","UseKeyTab":false,"Username":"","Password":"","ConfigPath":"/etc/krb5.conf","KeyTabPath":"/etc/security/kafka.keytab"},"TLS":{"Enabled":false,"CAPath":"","CertPath":"","KeyPath":"","ServerName":"","ClientCAPath":"","SkipHostVerify":false},"PlainText":{"UserName":"","Password":""}},"topic":"jaeger_v1_test"}
{"level":"fatal","ts":1592993485.7223234,"caller":"command-line-arguments/main.go:70","msg":"Failed to init storage factory","error":"kafka: invalid configuration (Net.SASL.User must not be empty when SASL is enabled)","stacktrace":"main.main.func1\n\tcommand-line-arguments/main.go:70\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v0.0.3/command.go:762\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v0.0.3/command.go:852\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v0.0.3/command.go:800\nmain.main\n\tcommand-line-arguments/main.go:126\nruntime.main\n\truntime/proc.go:203"}

As the log output shows, the field "PlainText":{"UserName":"","Password":""} is empty and the error says "Net.SASL.User must not be empty when SASL is enabled", which means the credentials I set are not being picked up here.

Any suggestions would be appreciated.

naseemkullah commented 4 years ago

Hi @Arnuphap-Yupuech, could you please provide your values wrapped in a yaml code block, properly indented?

naseemkullah commented 4 years ago

Also, can you please confirm whether you are using the jaeger or the jaeger-operator chart?

Arnuphap-Yupuech commented 4 years ago

Hi @naseemkullah, I am using the jaeger chart. Here are my values with the correct indentation:

storage:
  kafka:
    brokers:
      - kafka-broker1.com:9093
      - kafka-broker2.com:9093
      - kafka-broker3.com:9093
    topic: jaeger_v1_test
    authentication: PlainText
    extraEnv:
      - name: UserName
        value: redacted
      - name: Password
        value: redacted
naseemkullah commented 4 years ago

Thanks, could you please try:

storage:
  kafka:
    brokers:
      - kafka-broker1.com:9093
      - kafka-broker2.com:9093
      - kafka-broker3.com:9093
    topic: jaeger_v1_test
    authentication: plaintext
    extraEnv:
      - name: KAFKA_PRODUCER_PLAINTEXT_USERNAME
        value: redacted
      - name: KAFKA_PRODUCER_PLAINTEXT_PASSWORD
        value: redacted

storage.kafka.extraEnv is passed to the Kafka client, in this case the collector. Alternatively, the env vars could be set in the collector's own extraEnv.
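For illustration, a minimal sketch of that alternative, pulling the credentials from a Kubernetes secret instead of literal values; the kafka-credentials secret and its keys are assumptions, and this assumes the chart passes these entries straight through to the container env spec:

collector:
  extraEnv:
    - name: KAFKA_PRODUCER_PLAINTEXT_USERNAME
      valueFrom:
        secretKeyRef:
          name: kafka-credentials   # assumed secret holding the Kafka SASL credentials
          key: username
    - name: KAFKA_PRODUCER_PLAINTEXT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: kafka-credentials
          key: password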

Arnuphap-Yupuech commented 4 years ago

Thank you for your help @naseemkullah. The variables are now picked up correctly with plaintext authentication.

One more question: if I change the authentication mode to tls, which parameters need to be configured?

I tried something like this and it does not work:

storage:
  kafka:
    brokers:
      - kafka-broker1.com:9093
      - kafka-broker2.com:9093
      - kafka-broker3.com:9093
    topic: jaeger_v1_test
    authentication: tls
    extraEnv:
      - name: KAFKA_PRODUCER_TLS_CAPATH 
        value: "/var/ssl/private/kafka_broker.truststore.jks"
      - name: KAFKA_PRODUCER_TLS_CERTPATH 
        value: "/var/ssl/private/kafka_broker.keystore.jks"
naseemkullah commented 4 years ago

@Arnuphap-Yupuech please see https://www.jaegertracing.io/docs/1.18/cli/ for the CLI flags; converting a flag name to upper case and replacing the periods with underscores gives the corresponding env var you can use. Seems like those would be KAFKA_PRODUCER_TLS_CA, KAFKA_PRODUCER_TLS_CERT and KAFKA_PRODUCER_TLS_KEY.
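As a concrete illustration of that mapping, a sketch under the assumption that the relevant flags are kafka.producer.tls.ca, kafka.producer.tls.cert and kafka.producer.tls.key (check the linked docs for the exact names); the file paths are placeholders for PEM files already mounted into the pod:

# kafka.producer.tls.ca   -> KAFKA_PRODUCER_TLS_CA
# kafka.producer.tls.cert -> KAFKA_PRODUCER_TLS_CERT
# kafka.producer.tls.key  -> KAFKA_PRODUCER_TLS_KEY
storage:
  kafka:
    authentication: tls
    extraEnv:
      - name: KAFKA_PRODUCER_TLS_CA
        value: /path/to/ca.pem            # placeholder path
      - name: KAFKA_PRODUCER_TLS_CERT
        value: /path/to/client-cert.pem   # placeholder path
      - name: KAFKA_PRODUCER_TLS_KEY
        value: /path/to/client-key.pem    # placeholder path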

Arnuphap-Yupuech commented 4 years ago

@naseemkullah Thank you for your help, the configuration is parsed now. Unfortunately, I am still facing an issue applying our Kafka certificates. We are using Confluent Kafka with SASL_SSL, which uses truststore.jks and keystore.jks. After I apply those, it fails to parse the CA (truststore.jks) that I included in the config, with this error:

"TLS":{"Enabled":true,"CAPath":"/usr/share/extras/kafka_client.truststore.jks","CertPath":"/usr/share/extras/kafka_client.keystore.jks","KeyPath":"","ServerName":"","ClientCAPath":"","SkipHostVerify":true},"PlainText":{"UserName":"","Password":""}},"topic":"jaeger_v1_test"}
{"level":"fatal","ts":1593588560.471767,"caller":"command-line-arguments/main.go:70","msg":"Failed to init storage factory","error":"error loading tls config: failed to load CA CertPool: failed to parse CA /usr/share/extras/kafka_client.truststore.jks","stacktrace":"main.main.func1\n\tcommand-line-arguments/main.go:70\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v0.0.3/command.go:762\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v0.0.3/command.go:852\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v0.0.3/command.go:800\nmain.main\n\tcommand-line-arguments/main.go:126\nruntime.main\n\truntime/proc.go:203"}

Here is my values file:

storage:
  kafka:
    brokers:
      - kafka-broker1.com:9093
      - kafka-broker2.com:9093
      - kafka-broker3.com:9093
    topic: jaeger_v1_test
    authentication: tls
    extraEnv:
      - name: KAFKA_PRODUCER_TLS_CA
        value: "/usr/share/extras/kafka_client.truststore.jks"
      - name: KAFKA_PRODUCER_TLS_CERT
        value: "/usr/share/extras/kafka_client.keystore.jks"
      - name: KAFKA_PRODUCER_TLS_SKIP_HOST_VERIFY
        value: "true"
ingester:
  enabled: true
  extraSecretMounts:
    - name: confluent-keystore
      mountPath: /usr/share/extras
      subPath: ""
      secretName: confluent-keystore
      readOnly: true

How can I adapt this Confluent keystore for use with Jaeger?
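For context, a minimal sketch of what the confluent-keystore secret mounted above might look like as a manifest; the data keys match the file names used in the env var paths, and the base64 contents are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: confluent-keystore
type: Opaque
data:
  kafka_client.truststore.jks: <base64-encoded truststore>   # placeholder
  kafka_client.keystore.jks: <base64-encoded keystore>       # placeholder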

naseemkullah commented 4 years ago

Hi @Arnuphap-Yupuech, for one it seems the value for KAFKA_PRODUCER_TLS_KEY is missing; perhaps you can point it at keystore.jks directly, but the keystore may have to be split into a cert and a key.

@jaegertracing/jaeger-maintainers can you please confirm how to mount jks keystore in the ingester?

jpkrohling commented 4 years ago

We can't read JKS in the ingester. What we do when auto-provisioning Kafka via Strimzi is to get the cert data from the secret created based on our KafkaUser CR:

https://github.com/jaegertracing/jaeger-operator/blob/1dce126cdc3aa0f1582d53a6d62fcb359c34586c/pkg/strategy/streaming.go#L217-L249

naseemkullah commented 4 years ago

Thanks @jpkrohling. As such, @Arnuphap-Yupuech would have to convert the JKS files to PEM format as per https://docs.cloudera.com/documentation/enterprise/5-10-x/topics/cm_sg_openssl_jks.html#concept_ek3_sdl_rp, and then get the PEM files into k8s, is that it?
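If going the PEM route, a rough sketch of how the converted files might then be wired in, mirroring the structure of the values above; the kafka-tls-pem secret and the *.pem file names are assumptions for the converted output:

storage:
  kafka:
    authentication: tls
    extraEnv:
      - name: KAFKA_PRODUCER_TLS_CA
        value: /usr/share/extras/ca.pem            # CA extracted from truststore.jks
      - name: KAFKA_PRODUCER_TLS_CERT
        value: /usr/share/extras/client-cert.pem   # client cert extracted from keystore.jks
      - name: KAFKA_PRODUCER_TLS_KEY
        value: /usr/share/extras/client-key.pem    # private key extracted from keystore.jks
ingester:
  enabled: true
  extraSecretMounts:
    - name: kafka-tls-pem
      mountPath: /usr/share/extras
      secretName: kafka-tls-pem                    # assumed secret holding the converted PEM files
      readOnly: true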

jpkrohling commented 4 years ago

Either that, or change the cert provisioning logic to produce both a Kubernetes secret and a Java Keystore.

naseemkullah commented 4 years ago

Any luck with this @Arnuphap-Yupuech? If you've succeeded, the README could use some instructions.