Open · Munaf305 opened 10 months ago
Kafka connector (JDBC source) does not consider the AdditionalProjects parameter from the Simba connection.url when building table.whitelist
JdbcSourceTaskConfig values (logged while deploying the connector):

batch.max.rows = 100
catalog.pattern = null
connection.attempts = 3
connection.backoff.ms = 10000
connection.password = null
connection.url = jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=mainproject;OAuthType=0;OAuthPvtKeyPath=C:/kafka/**.json;OAuthServiceAcctEmail=my-test-service-account@b**.com;AdditionalProjects=additoinal-project;QueryDialect=SQL;Dataset=dataset1
connection.user = null
db.timezone = UTC
dialect.name = GenericDatabaseDialect
incrementing.column.name =
mode = bulk
numeric.mapping = null
numeric.precision.mapping = false
poll.interval.ms = 3600
query =
query.retry.attempts = -1
query.suffix =
quote.sql.identifiers = ALWAYS
schema.pattern = null
table.blacklist = []
table.monitoring.startup.polling.limit.ms = 10000
table.poll.interval.ms = 60000
table.types = [TABLE]
table.whitelist = [table1]
tables = [mainproject.dataset1.table1]   ---> this should be ---> tables = [additoinal-project.dataset1.table1]
tables.fetched = true
timestamp.column.name = []
timestamp.delay.interval.ms = 0
timestamp.granularity = connect_logical
timestamp.initial = null
topic.prefix = test
transaction.isolation.mode = DEFAULT
validate.non.null = true
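For context on why `tables` resolves under the main project: a generic JDBC dialect typically qualifies each short whitelisted table name with the catalog reported by the JDBC connection, which for the Simba BigQuery driver is the ProjectId from the connection.url; the AdditionalProjects parameter never enters that path. A minimal sketch of that qualification step (class and method names here are illustrative, not the connector's actual code):

```java
// Sketch: qualify a short table name the way a generic JDBC dialect
// typically does -- by prepending the connection's current catalog
// (ProjectId for the Simba BigQuery driver) and the schema/dataset.
// AdditionalProjects is never consulted, so "table1" always resolves
// under the main project. Names are illustrative.
public class TableQualifier {

    // Join catalog, schema, and table into a fully qualified
    // identifier, skipping parts that are null or empty.
    static String qualify(String catalog, String schema, String table) {
        StringBuilder sb = new StringBuilder();
        for (String part : new String[] {catalog, schema, table}) {
            if (part == null || part.isEmpty()) continue;
            if (sb.length() > 0) sb.append('.');
            sb.append(part);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // The catalog would come from Connection.getCatalog(),
        // i.e. ProjectId=mainproject -- not "additoinal-project".
        System.out.println(qualify("mainproject", "dataset1", "table1"));
        // matching the observed tables = [mainproject.dataset1.table1]
    }
}
```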
Looking for the connector to build the tables list from the AdditionalProjects parameter instead of only the ProjectId when resolving table.whitelist in the JDBC configuration.

Connector configuration:

name=test
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=mainproject;OAuthType=0;OAuthPvtKeyPath=C:/kafka/**.json;OAuthServiceAcctEmail=my-test-service-account@b**.com;AdditionalProjects=additoinal-project;QueryDialect=SQL;Dataset=dataset1
tasks.max=1
table.whitelist=table1
topic.prefix=test
poll.interval.ms=3600
mode=bulk
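As a possible workaround until the dialect honors AdditionalProjects, the JDBC source connector's documented `query` option bypasses table-name resolution entirely, so the table in the additional project can be referenced explicitly. A config sketch (project, dataset, and table names are the ones from this issue; adjust for your environment, and note that `query` and `table.whitelist` are mutually exclusive):

```properties
# Workaround sketch: query mode, so the connector never builds a
# qualified table name from ProjectId.
name=test
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=mainproject;OAuthType=0;OAuthPvtKeyPath=C:/kafka/**.json;OAuthServiceAcctEmail=my-test-service-account@b**.com;AdditionalProjects=additoinal-project;QueryDialect=SQL;Dataset=dataset1
tasks.max=1
# query replaces table.whitelist; fully qualify the table in the other project
query=SELECT * FROM `additoinal-project.dataset1.table1`
topic.prefix=test
poll.interval.ms=3600
mode=bulk
```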