Hi, I am using kafka-connect-jdbc without any special configuration. I have a table with a column of type 'oid' (https://www.postgresql.org/docs/current/static/datatype-oid.html), which means I am using the Large Objects feature of PostgreSQL.
Currently the kafka-connect-jdbc connector transfers this oid as an integer, which is not what I expect: I would like to retrieve the binary content of the large object, not its identifier.
Could you please give me some insight into how to solve this problem? Would it work better if I used the bytea type in PostgreSQL?
By the way, how do you serialize a blob when using the JSON serializer? I couldn't find this in the documentation.
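For context on what I mean by serializing a blob to JSON: since JSON has no binary type, the usual convention (and, as far as I can tell, what Kafka Connect's JsonConverter does for BYTES schema values) is to base64-encode the bytes. A minimal sketch in Python of that idea; the field names here are illustrative, not the connector's actual output:

```python
import base64
import json

# Simulated binary content of a large object (illustrative bytes).
blob = bytes([0x89, 0x50, 0x4E, 0x47])

# JSON cannot carry raw bytes, so the binary payload is base64-encoded
# into an ASCII string before being placed in the JSON document.
record = {"id": 1, "content": base64.b64encode(blob).decode("ascii")}
payload = json.dumps(record)
print(payload)

# A consumer reverses the encoding to recover the original bytes.
decoded = base64.b64decode(json.loads(payload)["content"])
assert decoded == blob
```

If the connector mapped the column to a BYTES schema (as it presumably would for bytea), I would expect the JSON output to carry the content in this base64 form rather than as an oid integer.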