Closed: quotient-vthakur closed this issue 3 years ago
Hey, good question. That client.properties file is meant to be a valid file for connecting to the interbroker listener no matter what security is on it. I think I selected sasl_plain_users.admin.principal in the case of RBAC because I knew it would also be a super.user.
Are you hoping to use that file for your clients, or is this just something that caught your eye? I agree that for external Kafka clients I would not use sasl_plain_users.admin.principal.
Thanks Domenic, I'm not sure I clearly understood "in the case of RBAC because I knew it'd also be a super.user". Yes, I would like to share credentials with the developers who are writing clients for the cluster, and I believe sasl_plain_users.admin will have more privileges than sasl_plain_users.client.
Yeah, so by design this file wasn't meant for external clients. But I think a nice feature addition to cp-ansible would be a role that creates client configs on client hosts for you, i.e. creates a functioning keystore/truststore and a client.properties. Those are just the first things that come to mind. What else do your clients need?
If you don't know about RBAC, I'd check out our official docs - it's essentially our improved ACLs. When you have RBAC (or ACLs) enabled and you use a client.properties file to request info from Kafka (e.g. get URPs), that user needs permissions to access those resources, and thus a super.user would be able to access things.
Ok, thanks for the explanation. So for now this client.properties, as you mentioned, is for interbroker communication, i.e. broker-1 to broker-2 and vice versa, right? And for clients I can share the credentials specified under sasl_plain_users.client. Am I getting it right?
So it kind of depends on the listener: the client properties needed to connect to the interbroker listener can be different from those for another listener (if they have different security settings).
I would say if you are just setting sasl_protocol: plain in your group_vars, then yes, all you need to do is replace the principal.
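For context, a minimal group_vars sketch of that setup might look like the following. The usernames and passwords here are placeholders; sasl_plain_users is the cp-ansible variable whose defaults ship in confluent.variables/defaults/main.yml, and you can override it rather than editing the source:

```yaml
# group_vars/all.yml (sketch; credential values are placeholders)
sasl_protocol: plain

# Overrides the defaults in roles/confluent.variables/defaults/main.yml
sasl_plain_users:
  admin:
    principal: admin
    password: admin-secret
  client:
    principal: client
    password: client-secret
```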
Sure, classic. Basically two steps; if possible please confirm, and then I can share those client credentials with the devs. Change:
```yaml
kafka_broker_client_properties: "{{ kafka_broker_listeners[kafka_broker_inter_broker_listener_name] | client_properties(ssl_enabled, fips_enabled, ssl_mutual_auth_enabled, sasl_protocol, '', kafka_broker_truststore_path, kafka_broker_truststore_storepass, kafka_broker_keystore_path, kafka_broker_keystore_storepass, kafka_broker_keystore_keypass, false, sasl_plain_users.admin.principal, sasl_plain_users.admin.password, sasl_scram_users.admin.principal, sasl_scram_users.admin.password, kerberos_kafka_broker_primary, kafka_broker_keytab_path, kafka_broker_kerberos_principal|default('kafka'), false, kafka_broker_ldap_user, kafka_broker_ldap_password, mds_bootstrap_server_urls) }}"
```

to (note the switch from `sasl_plain_users.admin` to `sasl_plain_users.client`):

```yaml
kafka_broker_client_properties: "{{ kafka_broker_listeners[kafka_broker_inter_broker_listener_name] | client_properties(ssl_enabled, fips_enabled, ssl_mutual_auth_enabled, sasl_protocol, '', kafka_broker_truststore_path, kafka_broker_truststore_storepass, kafka_broker_keystore_path, kafka_broker_keystore_storepass, kafka_broker_keystore_keypass, false, sasl_plain_users.client.principal, sasl_plain_users.client.password, sasl_scram_users.admin.principal, sasl_scram_users.admin.password, kerberos_kafka_broker_primary, kafka_broker_keytab_path, kafka_broker_kerberos_principal|default('kafka'), false, kafka_broker_ldap_user, kafka_broker_ldap_password, mds_bootstrap_server_urls) }}"
```
So... I generally advise against editing our source code, because you will be in a forked/unsupported situation.
Because you have to move that client.properties to other hosts anyway (right?), if I were you I would just use sed to create a new file and scp that one around. Something like:

```shell
sed <whatever> /etc/kafka/client.properties > external-client.properties
```
WDYT?
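To make that sed concrete, here is a sketch assuming SASL/PLAIN. The "admin"/"admin-secret" and "client"/"client-secret" values are placeholders, as is the sample sasl.jaas.config line; substitute whatever your sasl_plain_users dict actually contains:

```shell
# Stand-in for /etc/kafka/client.properties (illustrative content only;
# on a real broker you would read the existing file instead of creating one):
printf '%s\n' 'sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";' > /tmp/client.properties

# Swap the admin credentials for the client ones to build an external config.
sed -e 's/username="admin"/username="client"/' \
    -e 's/password="admin-secret"/password="client-secret"/' \
    /tmp/client.properties > /tmp/external-client.properties

cat /tmp/external-client.properties
```

The resulting external-client.properties can then be scp'd to client hosts without touching the cp-ansible templates.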
Alternatively, create a "confluent.kafka_client" role that creates client.properties files on your client hosts, and I'll review a PR :)
Agreed, I would use sed and add the client credentials to it. A question on how authentication will happen in the scenario below.
So it really depends on what the client is. I use client.properties with the kafka-topics shell command, and in that case it does not matter what the path is; you just need to pass the path as an argument. Here's an example with another bash command:

```shell
kafka-console-consumer --bootstrap-server kafka-broker1:9091 --consumer.config /tmp/client.props --from-beginning --topic test-topic
```

In this case client.props is in /tmp/.
You may have a Kafka Streams app that has its own properties file, and these props need to be inserted into that file.
Ok, I think I should have mentioned this in the beginning. The clients are simple Java-based producers and consumers. They use a kafka_client_jaas.conf file, which contains the stanza below:

```
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="someuname"
  password="somepwd";
};
```

Referring to this, should I share the admin credentials, or do you suggest using the client ones? In the latter case I would need to make changes in cp-ansible and re-run it. Hope I'm not confusing things more.
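As a side note, a plain Java client doesn't strictly need the separate JAAS file: the same login module line can be supplied inline through the sasl.jaas.config client property. A rough properties-file equivalent of the stanza above, with a placeholder bootstrap host and the same placeholder credentials, assuming SASL over plaintext:

```properties
# Sketch only: host, port, and credentials are placeholders
bootstrap.servers=kafka-broker1:9092
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="someuname" \
  password="somepwd";
```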
Umm, well if external-client.properties is not correctly formatted, the easiest solution (I think) is just to tell them what the username/password is, which can be found here: https://github.com/confluentinc/cp-ansible/blob/6.1.1-post/roles/confluent.variables/defaults/main.yml#L980-L981
Done, thanks a lot Domenic. Have a good one.
Describe the issue
From branch 6.1.1-post, we enabled SASL by uncommenting sasl_protocol: plain. In the brokers' server.properties file it added the properties listener.name.internal.plain.sasl.jaas.config and listener.name.broker.plain.sasl.jaas.config. In client.properties it added sasl.jaas.config. When I checked the content of sasl.jaas.config, I found it is using the admin credentials (refer to sasl_plain_users.admin in https://github.com/confluentinc/cp-ansible/blob/6.1.1-post/roles/confluent.variables/defaults/main.yml). I was expecting it to use sasl_plain_users.client. Is this expected? In the file https://github.com/confluentinc/cp-ansible/blob/6.1.1-post/roles/confluent.variables/vars/main.yml I found the below:
```yaml
kafka_broker_client_properties: "{{ kafka_broker_listeners[kafka_broker_inter_broker_listener_name] | client_properties(ssl_enabled, fips_enabled, ssl_mutual_auth_enabled, sasl_protocol, '', kafka_broker_truststore_path, kafka_broker_truststore_storepass, kafka_broker_keystore_path, kafka_broker_keystore_storepass, kafka_broker_keystore_keypass, false, sasl_plain_users.admin.principal, sasl_plain_users.admin.password, sasl_scram_users.admin.principal, sasl_scram_users.admin.password, kerberos_kafka_broker_primary, kafka_broker_keytab_path, kafka_broker_kerberos_principal|default('kafka'), false, kafka_broker_ldap_user, kafka_broker_ldap_password, mds_bootstrap_server_urls) }}"
```
This has a reference to the admin credentials. Kindly let me know if this is intentional.