confluentinc / kafka-connect-jdbc

Kafka Connect connector for JDBC-compatible databases

WARN JDBC type 100 (BINARY_FLOAT) not currently supported #1002

Open MarcelGor opened 3 years ago

MarcelGor commented 3 years ago

Hi,

I want to ingest data from an Oracle DB into Kafka. Most of the columns have the data type BINARY_FLOAT, and I get the warning `WARN JDBC type 100 (BINARY_FLOAT) not currently supported`. Are there any plans to support this data type?

Regards, Marcel

picpromusic commented 2 years ago

This worked for me as a prototype in `OracleDatabaseDialect`:

```java
@Override
protected String addFieldToSchema(
    ColumnDefinition columnDefn,
    SchemaBuilder builder,
    String fieldName,
    int sqlType,
    boolean optional
) {
  // Handle Oracle-specific types first
  switch (sqlType) {
    case 100: // BINARY_FLOAT
      // Use the same schema definition as a standard Double
      return super.addFieldToSchema(columnDefn, builder, fieldName, Types.DOUBLE, optional);
    default:
      break;
  }
  // Delegate to the superclass for the remaining standard types
  return super.addFieldToSchema(columnDefn, builder, fieldName, sqlType, optional);
}

@Override
protected ColumnConverter columnConverterFor(
    final ColumnMapping mapping,
    final ColumnDefinition defn,
    final int col,
    final boolean isJdbc4
) {
  switch (defn.type()) {
    case 100: // BINARY_FLOAT: read the value as a plain double
      return rs -> rs.getDouble(col);
    default:
      break;
  }
  return super.columnConverterFor(mapping, defn, col, isJdbc4);
}
```
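The dispatch above can be sketched in isolation. The helper below (`mapOracleType` is a hypothetical name, not part of the connector) shows the core idea: Oracle's vendor-specific type code 100 is remapped to `java.sql.Types.DOUBLE` before delegating, while every other code passes through unchanged.

```java
import java.sql.Types;

public class OracleTypeMapping {
    // Oracle's vendor-specific JDBC type code for BINARY_FLOAT
    static final int ORACLE_BINARY_FLOAT = 100;

    // Hypothetical helper mirroring the switch in addFieldToSchema:
    // remap BINARY_FLOAT to DOUBLE, pass all other codes through.
    static int mapOracleType(int sqlType) {
        switch (sqlType) {
            case ORACLE_BINARY_FLOAT:
                return Types.DOUBLE;
            default:
                return sqlType;
        }
    }

    public static void main(String[] args) {
        System.out.println(mapOracleType(100) == Types.DOUBLE);          // true
        System.out.println(mapOracleType(Types.VARCHAR) == Types.VARCHAR); // true
    }
}
```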

picpromusic commented 2 years ago

We tried to work with VIEWs and cast to other column types (FLOAT), but unfortunately `ResultSetMetaData` always returns `java.sql.Types.NUMERIC` with a big precision, leading to a big `byte[]` in the resulting schema. The OJDBC driver is somewhat misleading for this implementation: `DatabaseMetaData::getColumns` seems to return `java.sql.Types.(FLOAT|REAL|DOUBLE)`, but `ResultSetMetaData` always returns NUMERIC with a big precision.
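A minimal sketch (not connector code) of why that large precision hurts: Kafka Connect's `Decimal` logical type serializes a `BigDecimal` as the two's-complement bytes of its unscaled value, so a NUMERIC column reported with a large precision ends up as a multi-byte `byte[]` in the record instead of an 8-byte double. The example value below is an arbitrary BINARY_FLOAT-scale number chosen for illustration.

```java
import java.math.BigDecimal;

public class NumericBytesDemo {
    // How Connect's Decimal logical type encodes a value: the raw bytes
    // of the BigDecimal's unscaled integer part.
    static byte[] encode(BigDecimal value) {
        return value.unscaledValue().toByteArray();
    }

    public static void main(String[] args) {
        // A 39-digit integer, roughly the magnitude of the largest BINARY_FLOAT
        BigDecimal value = new BigDecimal("340282346638528860000000000000000000000");
        byte[] encoded = encode(value);
        // Well over the 8 bytes a plain double would need
        System.out.println(encoded.length > 8);
    }
}
```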