jcustenborder / kafka-config-provider-aws

Kafka Configuration Provider for AWS Secrets Manager

The plugin is not translating secrets from Secret Manager Service correctly #11

Open koren-at-fundbox opened 1 year ago

koren-at-fundbox commented 1 year ago

I'm not sure whether this is a real issue or a misconfiguration on our side. Our setup is as follows:

We have a secret in AWS Secrets Manager named /testing/cdc_mysql_secrets with the value {"username":"***","password":"***"}.

We are POSTing a new connector configuration for Debezium with the following settings (this is a partial config, of course):

    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.user": "${secretManager:/testing/cdc_mysql_secrets:username}",
    "database.password": "${secretManager:/testing/cdc_mysql_secrets:password}",
    "config.providers.secretManager.class": "com.github.jcustenborder.kafka.config.aws.SecretsManagerConfigProvider",
    "config.providers": "secretManager",
    "config.providers.secretManager.param.aws.region": "us-east-1"

The HTTP POST to the /connectors/ REST API endpoint responds with the following error:

    {"error_code":400,"message":"Connector configuration is invalid and contains the following 1 error(s):\nUnable to connect: Access denied for user '${secretManager:/testing/cdc_mysql_secrets:username}'@'IP' (using password: YES)\nYou can also find the above list of errors at the endpoint /connector-plugins/{connectorType}/config/validate"}

When calling the /connector-plugins/{connectorType}/config/validate API endpoint I see the same error in the database.host config object.

NOTE: replacing the username and password with the actual credentials as plain text works fine. We also have a local environment in which the issue reproduces, and we've added some debug logging there. We can confirm that the method public ConfigData get(String p, Set<String> keys) returns a ConfigData object whose map looks as follows: {"username":"***","password":"***"}. The log statement at com/github/jcustenborder/kafka/config/aws/SecretsManagerConfigProvider.java:78 also shows that the plugin code receives the correct arguments.

We would love some help on this matter. Thanks!

jcustenborder commented 1 year ago

@koren-at-fundbox What happens if you just send the config vs validate it?

koren-at-fundbox commented 1 year ago

@jcustenborder when sending the config, we get the following message:

    {"error_code":400,"message":"Connector configuration is invalid and contains the following 1 error(s):\nUnable to connect: Access denied for user '${secretManager:/testing/cdc_mysql_secrets:username}'@'IP' (using password: YES)\nYou can also find the above list of errors at the endpoint /connector-plugins/{connectorType}/config/validate"}

When calling the validate API, the response describes each Debezium configuration property, and the property named database.host includes the same error message as above (unlike the other properties).

BTW, we were able to work around this issue by setting the plugin configuration as environment variables in the Kafka Connect Docker image, following the conversion rules; for example, config.providers.secretManager.param.aws.region was converted to CONNECT_CONFIG_PROVIDERS_SECRETMANAGER_PARAM_AWS_REGION.
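The conversion rule the Confluent Docker images use is mechanical: prefix with CONNECT_, uppercase, and turn dots into underscores. A small helper sketching that mapping (my own illustration, not code from the image):

```python
def to_connect_env_var(prop: str) -> str:
    """Convert a Connect worker property name to the CONNECT_*
    environment variable the Confluent Docker images read at startup:
    prefix with CONNECT_, uppercase, replace dots with underscores."""
    return "CONNECT_" + prop.upper().replace(".", "_")

print(to_connect_env_var("config.providers.secretManager.param.aws.region"))
# CONNECT_CONFIG_PROVIDERS_SECRETMANAGER_PARAM_AWS_REGION
```

Note that camelCase provider names flatten to plain uppercase, so the env var cannot round-trip back to secretManager; the entrypoint script lowercases property names, which may be relevant to the casing experiments later in this thread.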

jcustenborder commented 1 year ago

ahh, maybe try "config.providers": "secretmanager" and convert everything to lowercase secretmanager

JavierMonton commented 1 year ago

@koren-at-fundbox were you able to solve the issue? We are having the same issue, replacing the variables with credentials as plain text works fine, but it fails using the configProvider. I can see that the secret has been retrieved from AWS but it doesn't seem to be replaced. In our case we are using MSK Connect + Snowflake Connector, but the issue seems to be the same.

jcustenborder commented 1 year ago

@JavierMonton How are you deploying? Are you using docker?

koren-at-fundbox commented 1 year ago

@JavierMonton as mentioned above, we were able to work around this issue by setting the plugin configuration as environment variables in the Kafka Connect Docker image, following the conversion rules; for example, config.providers.secretManager.param.aws.region was converted to CONNECT_CONFIG_PROVIDERS_SECRETMANAGER_PARAM_AWS_REGION.

@jcustenborder I tried "config.providers": "aws" and converted everything to aws; it didn't help.

igorvoltaic commented 1 year ago

Having the same issue. I've tried setting the config.providers.secretManager.param.aws.region config (both lowerCamelCase and lowercase) via the CONNECT_CONFIG_PROVIDERS_SECRETMANAGER_PARAM_AWS_REGION env var, but that did not help.

It does work with the file provider (org.apache.kafka.common.config.provider.FileConfigProvider), though. The error we get is:

    {
      "error_code": 400,
      "message": "Connector configuration is invalid and contains the following 1 error(s):\nUnable to connect: Access denied for user '${secretManager:dev/test_secret/MYSQ'@'IP' (using password: YES)\nYou can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`"
    }

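For comparison, a working FileConfigProvider setup typically looks like the fragment below (the path and file name are illustrative, not from this thread):

```
# worker configuration
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

# the connector configuration then references keys in a local properties file:
#   "database.user": "${file:/opt/secrets/mysql.properties:username}"
#   "database.password": "${file:/opt/secrets/mysql.properties:password}"
```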
JavierMonton commented 1 year ago

Thanks both for the quick replies. In our case we are using AWS MSK Connect, so we don't have access to the machines or Docker containers that AWS is using, and I don't think we have a way to change environment variables either.

I was able to add a few extra logs and recompile this config provider to check what was happening, and it's actually working fine: the keys from AWS Secrets Manager are retrieved and returned properly, but the replacement doesn't happen, so I don't think the issue is in this config provider.

I finally solved it in a different way. I've seen some people complaining that the field validations occur before the secrets are replaced. It's also curious that the Snowflake connector we are using skips validations if the string starts with ${file: (see code); file is just the provider name, which you can change, and here we were using secretManager.

I changed it to file, so I guess the validations are skipped, and now it's working fine.
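In other words, the workaround exploits the check described above: if the still-unresolved value looks like a file-provider placeholder, the connector reportedly skips credential validation. A hypothetical reimplementation of that check, purely to illustrate why renaming the provider to file helps (not the Snowflake connector's actual code):

```python
def snowflake_skips_validation(value: str) -> bool:
    """Sketch of the reported behavior: skip validating a config value
    that still looks like an unresolved file-provider placeholder."""
    return value.startswith("${file:")

# Naming the config provider "file" makes the placeholder match the check:
print(snowflake_skips_validation("${file:/testing/cdc_mysql_secrets:username}"))
print(snowflake_skips_validation("${secretManager:/testing/cdc_mysql_secrets:username}"))
```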

igorvoltaic commented 1 year ago

I've managed to create a debezium connector using environment variable configurations.

CONNECT_CONFIG_PROVIDERS_SECRETS_PARAM_AWS_REGION=
CONNECT_CONFIG_PROVIDERS_SECRETS_CLASS=
CONNECT_CONFIG_PROVIDERS=secrets

Unfortunately, it is only possible to use those via the entrypoint script at cluster startup. If I try to create or modify a connector on a running cluster with curl -XPUT ..., the request fails with the error mentioned earlier. For the Debezium plugin, the config provider name does not matter (i.e. aws, secretsmanager, file, etc.).

Updates using the file provider org.apache.kafka.common.config.provider.FileConfigProvider work without any issues.

This makes me wonder whether validation happens while the provider is still fetching the secrets, before anything has been replaced.

koren-at-fundbox commented 1 year ago

@JavierMonton in the case of AWS MSK Connect, you must create a resource called a worker configuration (different from a connector configuration) and set the following params:

    key.converter=
    value.converter=
    config.providers.secretManager.class=com.github.jcustenborder.kafka.config.aws.SecretsManagerConfigProvider
    config.providers=secretManager
    config.providers.secretManager.param.aws.region=

see here: https://docs.aws.amazon.com/msk/latest/developerguide/mkc-debeziumsource-connector-example.html

JavierMonton commented 1 year ago

I tried that, but it didn't work either. Not sure if the problem is related to the Snowflake Connector. It was solved with the file: approach, so I haven't tried anything else. But thanks for the help!

lincoln42 commented 1 year ago

@jcustenborder I am facing the problem above when attempting to use the AWS Secrets Manager configuration provider with the MongoDB Source Connector. The validation appears to happen before the replacement. I have tried the various approaches above, like lowercasing the provider name, but none have worked. Any idea what could be the cause of these problems? Could this be a regression of this Kafka Connect issue? https://github.com/confluentinc/kafka-connect-jdbc/issues/737

I have raised an issue on Kafka Connect: https://github.com/confluentinc/kafka-connect-jdbc/issues/1319