Closed: millin closed this 3 years ago
Hello @millin,
Thanks for the PR! It looks good!
Hey @baunz,
Thanks for reviewing, it is much appreciated!
Hello @millin,
I have refactored and added an integration test for your changes. Please have a look and feel free to change them (It needed updates to include Avro record producer).
Please also address @baunz's suggestions; there seem to be some test failures (at first glance related to empty rows and out-of-bounds errors). Unfortunately, CI is not run for external pull requests, but you can run scripts/ci.sh
locally before pushing; it is the same as CI, excluding only the Sonar checks.
Thanks for your contributions!
Fixes #51
Hello, I have applied the suggestions. I'm working on the test errors now.
@morazow I had two options to fix the tests: ignore the absence of the meta, or mock it. I decided it was better to mock it. I also corrected the new test for the timestamp, since it did not pass for me due to time zone differences.
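As a side note on the time zone issue mentioned above, here is a minimal illustrative sketch (object and method names are my own, not the project's code) of why a timestamp assertion can pass on one machine and fail on another: `Timestamp.toString` renders in the JVM's default time zone, so the expected string differs per machine, while formatting with an explicitly pinned zone (or comparing raw epoch milliseconds) is stable everywhere.

```scala
import java.sql.Timestamp
import java.time.Instant
import java.time.ZoneOffset
import java.time.format.DateTimeFormatter

object TimestampZoneNote {
  // Epoch milliseconds taken from the Kafka startTimeMs in the log below.
  val epochMillis = 1630394054721L

  // Zone-dependent: Timestamp.toString uses the JVM default time zone,
  // so this string differs between machines in different zones.
  def localString: String = new Timestamp(epochMillis).toString

  // Zone-independent: the zone is pinned to UTC explicitly, so the
  // output is identical on every machine.
  def utcString: String =
    DateTimeFormatter
      .ofPattern("yyyy-MM-dd HH:mm:ss.SSS")
      .withZone(ZoneOffset.UTC)
      .format(Instant.ofEpochMilli(epochMillis))
}
```

Asserting on `utcString` (or directly on epoch milliseconds) keeps such a test deterministic across developer machines and CI.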
Locally I have only one failing test, `run throws if it cannot create KafkaConsumer`, but it does not look like it fails because of my changes:
[pool-1-thread-1-ScalaTest-running-KafkaTopicMetadataReaderIT] WARN org.apache.kafka.clients.consumer.ConsumerConfig - The configuration 'schema.registry.url' was supplied but isn't a known config.
[pool-1-thread-1-ScalaTest-running-KafkaTopicMetadataReaderIT] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 6.2.0-ccs
[pool-1-thread-1-ScalaTest-running-KafkaTopicMetadataReaderIT] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 1a5755cf9401c84f
[pool-1-thread-1-ScalaTest-running-KafkaTopicMetadataReaderIT] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1630394054721
[pool-1-thread-1-ScalaTest-running-KafkaTopicMetadataReaderIT] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-EXASOL_KAFKA_UDFS_CONSUMERS-5, groupId=EXASOL_KAFKA_UDFS_CONSUMERS] Connection to node -1 (kafka01.internal/127.0.0.1:9092) could not be established. Broker may not be available.
[pool-1-thread-1-ScalaTest-running-KafkaTopicMetadataReaderIT] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-EXASOL_KAFKA_UDFS_CONSUMERS-5, groupId=EXASOL_KAFKA_UDFS_CONSUMERS] Bootstrap broker kafka01.internal:9092 (id: -1 rack: null) disconnected
...
** HERE ARE MANY IDENTICAL LINES AS ABOVE AND BELOW **
...
[pool-1-thread-1-ScalaTest-running-KafkaTopicMetadataReaderIT] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-EXASOL_KAFKA_UDFS_CONSUMERS-5, groupId=EXASOL_KAFKA_UDFS_CONSUMERS] Connection to node -1 (kafka01.internal/127.0.0.1:9092) could not be established. Broker may not be available.
[pool-1-thread-1-ScalaTest-running-KafkaTopicMetadataReaderIT] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-EXASOL_KAFKA_UDFS_CONSUMERS-5, groupId=EXASOL_KAFKA_UDFS_CONSUMERS] Bootstrap broker kafka01.internal:9092 (id: -1 rack: null) disconnected
[info] - run throws if it cannot create KafkaConsumer *** FAILED ***
[info] Expected exception com.exasol.cloudetl.kafka.KafkaConnectorException to be thrown, but org.apache.kafka.common.errors.TimeoutException was thrown (KafkaTopicMetadataReaderIT.scala:70)
Hey @millin,
Thanks for the changes! I am going to check it soon.
Hello @morazow,
Thanks for the fix, code looks better :+1: Let me know if I need to fix anything else.
Adds the ability to use TIMESTAMP as the output column type for any field containing a Unix epoch timestamp in milliseconds (incl. the message timestamp).
Previously, using TIMESTAMP as the output column type resulted in an error.
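The core of such a conversion can be sketched as follows. This is a hypothetical illustration, not the PR's actual code: `java.sql.Timestamp` accepts milliseconds since the Unix epoch directly, which is exactly the representation Kafka uses for message timestamps.

```scala
import java.sql.Timestamp

object EpochMillisToTimestamp {
  // Convert a field holding milliseconds since the Unix epoch (for
  // example, a Kafka record's message timestamp) into a value suitable
  // for a TIMESTAMP output column. The Timestamp constructor takes
  // epoch milliseconds directly, so no intermediate parsing is needed.
  def convert(epochMillis: Long): Timestamp = new Timestamp(epochMillis)
}
```

The conversion is lossless: `Timestamp.getTime` round-trips the original milliseconds value.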