andyhuynh3 opened 7 months ago
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
I don't understand why we need to add an option for `ignore.default.for.nullables` here.
Do you know why deserialization via the confluent-schema-registry lib with the option `value.converter.ignore.default.for.nullables` is not working?
I do not, but I suppose it's similar to why this PR is in place to expose the `scrub.invalid.names` config.
The config is honored by producers (e.g. Debezium), but I wasn't able to get it working with the S3 sink until I introduced the changes in this PR.
Problem
https://github.com/confluentinc/schema-registry/pull/2326 introduced the `ignore.default.for.nullables` Avro converter config property. However, the storage connectors currently cannot take advantage of it because it is not an exposed config. For example, when using the S3 sink connector, null values are still replaced with defaults, as detailed in this issue. Because this config is not exposed, `ignore.default.for.nullables` will always come in with the default of `false`.
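To make the behavior concrete, here is a minimal Python sketch of the substitution described above. This is not Connect's actual converter code; the field name and default value are made up for illustration:

```python
# Toy model of the default-substitution behavior for a nullable Avro field.
# Field name and default value are illustrative, not from any real schema.
field_schema = {"name": "middle_name", "type": ["string", "null"], "default": "N/A"}

def convert_value(value, schema, ignore_default_for_nullables):
    """Return the value the sink would see for a nullable field."""
    if value is None and not ignore_default_for_nullables:
        # Default behavior: a null is silently replaced with the field default.
        return schema.get("default")
    # With ignore.default.for.nullables=true, the null is preserved.
    return value

print(convert_value(None, field_schema, False))  # prints "N/A"
print(convert_value(None, field_schema, True))   # prints "None"
```

With the flag left at `false`, a record written with a null `middle_name` would land in S3 as `"N/A"`, which is the data-loss scenario this PR addresses.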
Solution
Expose the `ignore.default.for.nullables` option so that it can be configured.

Does this solution apply anywhere else?
If yes, where?
Test Strategy
I rebuilt the `kafka-connect-storage-core-11.2.4.jar` with the changes included in this PR, then ran some manual tests with the S3 connector to confirm that the option takes effect. Here's what my S3 sink settings look like:

After starting the connector, I see from the logs below that the `ignore.default.for.nullables` setting was correctly applied.

Testing done:
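The settings block referenced above did not survive extraction. As a hedged sketch, an S3 sink config carrying this option might look like the following, with topic, bucket, region, and registry URL values purely illustrative:

```json
{
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "topics": "example-topic",
  "s3.bucket.name": "example-bucket",
  "s3.region": "us-east-1",
  "storage.class": "io.confluent.connect.s3.storage.S3Storage",
  "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
  "flush.size": "1000",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "http://localhost:8081",
  "value.converter.ignore.default.for.nullables": "true"
}
```

The key line is the `value.converter.ignore.default.for.nullables` entry, which only takes effect once the config is exposed as proposed in this PR.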
Release Plan