Closed kashyapyv closed 9 months ago
Thank you for using the connector.
The information is good news, i.e., the events are being sent to EventBridge. So we need to look into why you're not seeing them on the EventBridge side. This could have multiple reasons:
1. The bus name/ARN is wrong (`PutEvents` doesn't fail if the bus doesn't exist)
2. The rule filter is not matching any event
3. The target is invoked but there's an error, and you don't have a dead-letter queue or have a high retry count which backs off
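As a quick sketch for ruling out reason 1 and inspecting the rules, the AWS CLI can be used (bus name and region below are placeholders; this assumes the same credentials the connector uses):

```shell
# Confirm the bus actually exists in this account/region
# (PutEvents succeeds even when it doesn't).
aws events describe-event-bus --name my-event-bus --region us-west-2

# List the rules attached to that bus and inspect their event patterns.
aws events list-rules --event-bus-name my-event-bus --region us-west-2
```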
Let's troubleshoot those topics together here.
Hey @embano1, thanks for the quick reply.
We have verified the following information:
```json
{
  "name": "request-test-eventBridge_Sink_connect",
  "config": {
    "connector.class": "software.amazon.event.kafkaconnector.EventBridgeSinkConnector",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "errors.retry.timeout": "-1",
    "errors.tolerance": "all",
    "errors.log.enable": "true",
    "errors.log.include.messages": "true",
    "topics": "request-test.in",
    "errors.deadletterqueue.topic.name": "request-test.dlq",
    "errors.deadletterqueue.topic.replication.factor": "1",
    "aws.eventbridge.connector.id": "####.########",
    "aws.eventbridge.region": "us-west-2",
    "aws.eventbridge.eventbus.arn": "arn:aws:events:us-west-2:4########:event-bus/c#########s",
    "aws.eventbridge.retries.max": "2",
    "aws.eventbridge.retries.delay": "200",
    "value.converter.schemas.enable": "false"
  }
}
```
Can you please post your rule/filter here? If PutEvents succeeds and no message is in the DLQ, I still think it's a filter issue.
You say you tested with messages on the bus. Which event payload did you use? Because the connector wraps records in its own event schema, perhaps the filter is not matching?
Also, on the rule in EventBridge you can configure a DLQ. Perhaps it's an invocation issue on the bus target?
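One way to rule out a non-matching filter is a temporary catch-all rule: an empty `prefix` match on `source` matches every event. Bus name and region below are placeholders:

```shell
# Temporary catch-all rule: an empty "prefix" match on "source"
# matches all events, so anything reaching the bus triggers it.
aws events put-rule \
  --name debug-catch-all \
  --event-bus-name my-event-bus \
  --event-pattern '{"source": [{"prefix": ""}]}' \
  --region us-west-2
# Attach e.g. a CloudWatch Logs target to this rule to see the raw events.
```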
@kashyapyv can you please report back? Do you want me to keep this issue open or shall we close it?
@embano1 Sorry for the delay; we were having a few issues sharing the current filters, so we are creating a different event bus for this and will get back to you. We use a JSON file as the payload. We also faced an issue with the credentials provider: we had originally been using the .aws/config and credentials files to provide aws_access_key_id and aws_secret_access_key, but kept getting an error stating "unable to load credentials from any providers in the chain", so we instead started using environment variables. We would like to use the credentials file if possible, as we will be using multiple such profiles; could you help us with that?
Yes, understood regarding the sensitivity of the information. Once you share, we can troubleshoot the filter. Again, please also set up a DLQ on the target so any downstream issues get reported into the DLQ (set retries to 0 on the target in the rule).
Regarding credentials: are you running the connector in a container or virtual machine? We use the default credentials provider unless IAM roles are specified: https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html
We use a virtual machine. Regarding the default credentials provider, https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/auth/credentials/DefaultCredentialsProvider.html states we can use a credential profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI. Is this profile file different from the credentials file in ~/.aws/credentials?
Here you can see how the default credentials provider is created (L34...): https://github.com/awslabs/eventbridge-kafka-connector/blob/main/src/main/java/software/amazon/event/kafkaconnector/auth/EventBridgeCredentialsProvider.java
And it uses the AWS SDK Java v2, hence it should behave as described in the Java doc. I haven't tried with a credentials file though, always tested with environment variables.
What does your profile look like? If you don't have a default profile, did you specify one with the environment variable as described here: https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials-profiles.html#set-a-custom-profile-as-the-default
Yes, we have set the environment variable using the export command as shown there. As for the ~/.aws/credentials file, it looks like this:

```
[default]
aws_access_key_id = abcdefg
aws_secret_access_key = gfedcba
```
This had worked when sending an event via the AWS CLI but isn't working when we use the Kafka connector. Also, we would like to add more profiles, so we would prefer not to depend on a default environment variable.
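As a side note, it may be worth double-checking the exact key names in the credentials file, since a misspelled key is silently ignored by the SDK and only surfaces as the generic "unable to load credentials from any providers in the chain" error. A minimal sketch (the file path and demo content are examples):

```shell
#!/bin/sh
# Check a credentials file for the exact key names the AWS SDK expects.
check_creds_file() {
  if grep -Eq '^[[:space:]]*aws_access_key_id[[:space:]]*=' "$1" &&
     grep -Eq '^[[:space:]]*aws_secret_access_key[[:space:]]*=' "$1"; then
    echo "ok: $1 contains the expected keys"
  else
    echo "warning: $1 is missing aws_access_key_id or aws_secret_access_key"
  fi
}

# Demo on a throwaway file with a misspelled key name: prints a warning.
f=$(mktemp)
printf '[default]\naws_acess_key_id = abcdefg\naws_secret_access_key = gfedcba\n' > "$f"
check_creds_file "$f"
rm -f "$f"
```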
Ok, let me verify the credentials behavior then as well! I'll report back in a couple of days. Right now you at least have a workaround.
The event delivery issue remains open for now...
I successfully verified that AWS configuration profiles work!
Steps:
1. Copy your `credentials` and `config` files onto your Kafka Connect host(s)
2. Set `AWS_CONFIG_FILE` and `AWS_SHARED_CREDENTIALS_FILE` accordingly to point to these files

Let me know if this helps!
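A minimal sketch of the environment variables for step 2 (the paths are examples and must point at the files copied onto the Connect host; set them before Kafka Connect starts):

```shell
# Point the AWS SDK's default provider chain at the copied files
# (example paths) before starting Kafka Connect.
export AWS_CONFIG_FILE=/etc/kafka-connect/aws/config
export AWS_SHARED_CREDENTIALS_FILE=/etc/kafka-connect/aws/credentials
```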
@embano1 - This is working now (the connector doesn't show an error like it previously did). Will need to test with more profiles later. Will update if the events are being posted successfully on testing.
Hey @embano1, we added another profile and created another connector for this user, but the connector picks up credentials for the first user and we get this exception in the connector: `EventBridgeException: User: arn:aws:iam::123123123:user/user1 is not authorized to perform: events:PutEvents on resource: arn:aws:events:us-west-2:##########:event-bus/namexyz-dev-us-west-2-eventbus-xyz because no resource-based policy allows the events:PutEvents action.`
The config file currently looks like:

```
[user1]
aws_access_key_id = abcdefg
aws_secret_access_key = gfedcba

[user2]
aws_access_key_id = abcdefg
aws_secret_access_key = gfedcba
```
How do we make the connector also use the user2 profile from the config file?
Hi, unfortunately this is again really a Kafka Connect question and not specific to our connector. The reason, I guess, is that you use a VM to run Kafka Connect and its connectors: they all see the same environment variables when they are started (by Kafka Connect), so those settings apply globally.
I have no experience with this specific ask. Is there a Kafka Connect expert you could ask, i.e., how to pass specific properties to a connector (sink) and not apply them globally? Here it is described how to do this via environment variables and Java properties. You could try just setting this property in the connector config and see if it works?
Can you also please report back whether, for the default or custom profile, you were now able to receive messages in an EventBridge target?
@kashyapyv I spoke with @agebhar1 and looks like today this is not possible. I have created a separate issue. Can you please take a look if that would work for you and whether the proposed configuration is user-friendly?
@embano1 - That configuration looks good for us, thank you. As for the default profile, we will test it again soon after some changes are made to the EventBridge rules. Will keep you posted.
> That configuration looks good for us, thank you. As for the default profile, we will test it again soon after some changes are made to the EventBridge rules. Will keep you posted.
Great! The configuration option will have a `.name` suffix to specify the profile name.
I have a PR almost ready. Would you know how to check out a PR code and create a custom build of the connector to test the new feature?
Here's the PR with the new feature and updated README in case you're interested in trying it out: https://github.com/awslabs/eventbridge-kafka-connector/pull/197
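To answer the earlier question about checking out PR code: a sketch of fetching the PR branch and building a local artifact, assuming a standard Maven build (the PR number comes from the link above; the exact build command may differ, see the repository README):

```shell
# Fetch the PR head into a local branch and check it out.
git clone https://github.com/awslabs/eventbridge-kafka-connector.git
cd eventbridge-kafka-connector
git fetch origin pull/197/head:pr-197
git checkout pr-197

# Build the connector JAR locally (skipping tests for speed), then drop
# the resulting artifact into the Kafka Connect plugin path.
mvn clean package -DskipTests
```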
@embano1 - What permissions or rules is the user expected to have on the EventBus? Right now we only have PutEvents permission on the user, the messages are being sent from the source connector, and nothing is being blocked by the firewall, but the messages are missing from the event bus. The event bus only has one filter, which checks the source of the message.
Also, will this profile name config change be added to the Confluent page? If not, when can it be merged?
> What permissions or rules is the user expected to have on the EventBus? Right now we only have PutEvents permission on the user, the messages are being sent from the source connector, and nothing is being blocked by the firewall, but the messages are missing from the event bus.
If I understood correctly, IAM is not an issue here, since you're seeing positive log messages in the connector showing that it was able to send the event to EventBridge, right? In case of auth/IAM issues, the connector would actually throw an exception highlighting that it cannot send to EventBridge due to insufficient permissions. The `PutEvents` action is perfectly sufficient.
I am happy to jump on a call and troubleshoot with you if needed, just let me know.
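For reference, a minimal identity-based policy granting only `events:PutEvents` looks roughly like this, assuming same-account delivery (user name, account ID, and bus ARN are placeholders); cross-account delivery additionally needs a resource-based policy on the bus itself:

```shell
# Attach an inline policy allowing only events:PutEvents on one bus
# (all names/ARNs are placeholders).
aws iam put-user-policy \
  --user-name user1 \
  --policy-name eventbridge-putevents \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "events:PutEvents",
      "Resource": "arn:aws:events:us-west-2:123456789012:event-bus/my-event-bus"
    }]
  }'
```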
> Also, will this profile name config change be added to the Confluent page? If not, when can it be merged?
You mean to the Confluent Connector Hub? We have some features planned before the next release, which will then be available on Confluent as well. In the meantime, you can always build from source if needed.
@embano1 - We were able to post events to eventbridge successfully.
> We were able to post events to EventBridge successfully.
Fantastic! Can you elaborate what the issue was?
Btw: I'm going to create a new release today containing the profile feature!
@kashyapyv FYI, v1.2.0 with support for custom profiles is live on Confluent Hub: https://www.confluent.io/hub/aws/kafka-eventbridge-sink
@embano1 The issue might have been with firewalls and the resource-based policy; we were testing out a few things. We will update the connector and let you know about the profile feature.
@kashyapyv great! Glad it's working! Leaving this issue open until you report back regarding the profiles.
@kashyapyv can you please report back whether the latest release with profile support fixes all your issues so we can close this issue? Will close this one as "completed" if we don't hear back from you within a week.
@embano1 - Sorry for the late reply. The connector is working fine with multiple profiles. Thanks a lot for your help!!
Hi Team, the Sink Connector is running but the events are not being posted to EventBridge. The connector is created and running without any errors; we enabled TRACE logging and analyzed the logs, which show that the EventId was returned successfully, but we do not receive these events in EventBridge.