confluentinc / kafka-connect-hdfs

Kafka Connect HDFS connector

kafka-connect-hdfs upon thrift server, instead of hive metastore #116

Open lakeofsand opened 8 years ago

lakeofsand commented 8 years ago

In some Spark clusters there is no Hive metastore deployed, only a thrift server running on top of the Spark engine. We should consider supporting kafka-connect-hdfs in this scenario.

I tried modifying it locally; without much change it works well. (But so far, schema changes are a little difficult.)

cotedm commented 7 years ago

@lakeofsand I believe this enhancement proposal is now obsolete given that we have the JDBC Sink Connector, which can do this directly. Feel free to reopen if you are talking about something other than the thrift server for Spark.

lakeofsand commented 7 years ago

It is not exactly the same as the JDBC Sink Connector. The HDFS Sink Connector also needs a Hive metastore service to sync with Hive when a new partition's data comes in.

It needs support for syncing with Hive through a Spark thrift server, not a Hive metastore service.
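Concretely (a hedged sketch, not code from the connector): "sync with Hive" through a Spark thrift server would amount to issuing ordinary Hive DDL over a plain JDBC connection; the host, table, and partition below are made-up placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PartitionSyncSketch {
        public static void main(String[] args) throws Exception {
            // The Spark thrift server speaks the HiveServer2 wire protocol,
            // so the standard Hive JDBC driver can talk to it.
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://spark-thrift-host:10000/default", "user", "");
                 Statement stmt = conn.createStatement()) {
                // Register a newly written HDFS directory as a table partition.
                stmt.execute("ALTER TABLE logs ADD IF NOT EXISTS PARTITION (dt='2017-01-01') "
                    + "LOCATION '/topics/logs/dt=2017-01-01'");
            }
        }
    }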

cotedm commented 7 years ago

@lakeofsand the Spark thrift server is akin to the HiveServer2 implementation and as such has no state to sync: http://spark.apache.org/docs/latest/sql-programming-guide.html#running-the-thrift-jdbcodbc-server

I'm not sure what the current implementation is lacking, but if you can lay out an example then that would be helpful.

lakeofsand commented 7 years ago

Sorry for my poor explanation...

Let me put it this way:

Currently, kafka-connect-hdfs uses the class HiveMetaStore to perform Hive actions, for example adding partitions when new data comes in. It relies on org.apache.hadoop.hive.metastore.* and needs a Hive metastore service in the cluster.
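For contrast, the metastore-client style of call this code path depends on looks roughly like the sketch below (an illustration, not connector source); note it only works against a running metastore service:

    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

    public class MetastoreSketch {
        public static void main(String[] args) throws Exception {
            HiveConf conf = new HiveConf();
            // Requires a dedicated, running Hive metastore service.
            conf.set("hive.metastore.uris", "thrift://metastore-host:9083");
            HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
            try {
                // Create the partition directly through the metastore API.
                client.appendPartition("default", "logs", "dt=2017-01-01");
            } finally {
                client.close();
            }
        }
    }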

In our Spark 1.6 cluster there is no Hive metastore service, so we would have to deploy a new one just for kafka-connect-hdfs. That is heavyweight and not worth it.

So we added a thin implementation, Hive2Thrift, built just on java.sql.*. It can do the same things, but only needs the standard java.sql.* package and a Spark thrift server.
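Hive2Thrift itself is not shown in this thread; a minimal hypothetical sketch of such a wrapper, assuming the standard Hive JDBC driver is on the classpath, could look like this (the ThriftUtil code in the next comment calls its execute() method):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Hypothetical reconstruction: everything goes through java.sql.*,
    // so no Hive metastore client libraries are needed.
    public class Hive2Thrift implements AutoCloseable {
        private final Connection connection;

        public Hive2Thrift(String jdbcUrl, String user, String password) throws SQLException {
            // e.g. jdbc:hive2://spark-thrift-host:10000/default
            this.connection = DriverManager.getConnection(jdbcUrl, user, password);
        }

        // Run a single DDL statement against the thrift server.
        public void execute(String sql) throws SQLException {
            try (Statement stmt = connection.createStatement()) {
                stmt.execute(sql);
            }
        }

        @Override
        public void close() throws SQLException {
            connection.close();
        }
    }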

I am no expert, but in our Spark cluster it is really not worth deploying a heavyweight Hive metastore service.

cotedm commented 7 years ago

@lakeofsand so are you suggesting an architectural change here to remove the HiveMetastore dependency of the connector for those HDFS instances that have no Hive service associated with them? I'll reopen this but I think we need more details here because that's a pretty non-trivial change.

lakeofsand commented 7 years ago

Maybe there is no need for an 'architectural change'. In our local implementation, we just extend a class named ThriftUtil from HiveUtil (io.confluent.connect.hdfs.hive), like:

    public class ThriftUtil extends HiveUtil {
        ...
        @Override
        public void createTable(String database, String tableName, Schema schema, Partitioner partitioner)
                throws Hive2ThriftException {
            StringBuilder createDBDDL = new StringBuilder();
            String createTableDDL;

            createDBDDL.append("CREATE DATABASE IF NOT EXISTS ").append(database);
            hive2Thrift.execute(createDBDDL.toString());

            createTableDDL = getCreateTableDDL(database, tableName, schema, partitioner, this.lifeCycle);
            log.debug("create table ddl {}", createTableDDL);
            hive2Thrift.execute(createTableDDL);
        }
        ...
    }

lakeofsand commented 7 years ago

But I can't find an appropriate way to override alterSchema().
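One possible direction, sketched under assumptions rather than tested: the connector's Avro integration keeps the table schema in the avro.schema.literal table property, so an override inside the ThriftUtil class above might push a schema change as plain DDL. Here avroData (an io.confluent.connect.avro.AvroData instance) and the Hive2ThriftException(Throwable) constructor are assumed parts of the local implementation, and Schema is org.apache.kafka.connect.data.Schema as in createTable:

    @Override
    public void alterSchema(String database, String tableName, Schema schema) {
        // Convert the Connect schema to an Avro schema string (assumes Avro-backed tables).
        String avroSchema = avroData.fromConnectSchema(schema).toString();
        String ddl = "ALTER TABLE " + database + "." + tableName
            + " SET TBLPROPERTIES ('avro.schema.literal'='" + avroSchema + "')";
        try {
            hive2Thrift.execute(ddl);
        } catch (SQLException e) {
            throw new Hive2ThriftException(e);
        }
    }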