Open · dechants opened this issue 3 months ago
Hello @dechants and sorry for the delay.
Mapping Avro bytes/decimal into Snowflake VARCHAR was added in this PR, and the reason for doing that was the difference in precision between the types.
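For context, such a value arrives at the connector as an Avro bytes field with a decimal logical type, roughly like this (field name and precision are illustrative; the connect.decimal.precision parameter mentioned below is carried as Connect metadata on that schema when the source provides it):

```json
{
  "name": "INCREMENTALKEY",
  "type": {
    "type": "bytes",
    "logicalType": "decimal",
    "precision": 38,
    "scale": 0
  }
}
```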
There are two solutions I can imagine:
1) Check the value of connect.decimal.precision and adjust the Snowflake type; however, I don't know if it is possible to access the precision from the code easily.
2) Create a parameter that would switch between VARCHAR and NUMBER (a hypothetical sketch follows below). The risk of a precision mismatch would be on the user.
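Purely as an illustration of option 2, the sink configuration could grow a switch like the one below; snowflake.decimal.mapping is a made-up name, not an existing option:

```properties
# Hypothetical sketch only -- snowflake.decimal.mapping does NOT exist today.
name=snowflake-sink
connector.class=com.snowflake.kafka.connector.SnowflakeSinkConnector
topics=example.topic
# proposed switch between the current VARCHAR mapping and NUMBER:
snowflake.decimal.mapping=NUMBER
```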
@sfc-gh-xhuang what do you think?
@sfc-gh-mbobowski no worries, thank you for your reply.
Could you please explain both options and provide a configuration example?
The source is a COTS application managed by another team, so there is no chance that I could make changes there. However, I know that the field is a primary key (integer) which is defined as NUMBER in the source Oracle database without precision and scale. The JDBC source connector is configured with numeric.mapping = best_fit (Confluent doc).
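For reference, the source side looks roughly like this (connection details and table names are placeholders, not the real configuration):

```properties
name=oracle-jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:oracle:thin:@//dbhost:1521/SERVICE
table.whitelist=MY_TABLE
mode=incrementing
incrementing.column.name=INCREMENTALKEY
# best_fit maps NUMERIC columns to primitive Connect types where precision/scale allow;
# anything it cannot fit is still emitted as Connect Decimal (Avro bytes/decimal)
numeric.mapping=best_fit
```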
@dechants The only existing solution to this problem is to create the table on your own instead of leaving it to the connector. You don't have to create every column; just focus on the NUMBER column and let schema evolution do the rest.
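A minimal sketch of that workaround, assuming the target table is called KAFKA_TARGET_TABLE (adjust the name to your topic-to-table mapping):

```sql
-- Pre-create only the column whose type matters; schema evolution adds
-- the remaining columns once the connector starts writing.
CREATE TABLE IF NOT EXISTS KAFKA_TARGET_TABLE (
    INCREMENTALKEY NUMBER(38,0)
);
```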
Please let me know if it solves your problem.
@sfc-gh-mbobowski thanks, we will try that.
Hi,
I have the following settings (snowflakeinc/snowflake-kafka-connector:2.2.2):
My schema:
The connector creates the table; however, INCREMENTALKEY is VARCHAR(16777216). How can I make sure that the connector automatically creates the table in Snowflake and "maps" numeric values correctly?
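The type the connector chose can be seen with a plain describe (table name is a placeholder):

```sql
-- INCREMENTALKEY is listed as VARCHAR(16777216) instead of a numeric type
DESC TABLE KAFKA_TARGET_TABLE;
```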