Closed lokeshpkumar closed 9 years ago
Hi @lokeshpkumar ,
Sorry, the spark1.5-alpha branch is not ready for use yet. We will notify you when it is. In the meantime, we suggest using our syntax to create the table, for example:
hbaseCtx.sql( """CREATE TABLE ips12 (column2 INTEGER, column1 INTEGER, column4 FLOAT, column3 SHORT, PRIMARY KEY(column1)) MAPPED BY (hbase_table, COLS=[column2=family0.qualifier0, column3=family1.qualifier1, column4=family2.qualifier2])"""
@lokeshpkumar
BTW, have you created the logical table "ips12" before? Creating two logical tables with the same name is not allowed.
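As an aside: if a logical table with that name was left behind by an earlier run, dropping it first should let the CREATE succeed again. This is a sketch based on standard DDL; whether the spark1.5-alpha branch supports `DROP TABLE` exactly like this is an assumption, so verify against the project's docs:

```java
// Assumption: the branch accepts standard DROP TABLE DDL for logical tables.
hbaseCtx.sql("DROP TABLE ips12");
```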
Hi @lokeshpkumar
Now, the "spark1.5-alpha" branch should be compatible with spark1.5. Our team is doing the further testing and you are welcome to have a try. :)
Hi @xinyunh
Thanks for the response. I understand now that we should map the logical table name to the actual HBase table only once and then reuse it (correct me if I am wrong). Also, since we will be using Spark 1.5, we will use your 1.5 branch. I will let you know if I run into any more issues.
Thanks, @lokeshpkumar
Hi
We are using the spark-1.5-alpha branch to test spark-sql-on-hbase with HBase. Every time we run the program after the first successful execution, we get the error below.
```
Exception in thread "main" java.lang.Exception: The logical table: ips12 already exists
    at org.apache.spark.sql.hbase.HBaseCatalog.createTable(HBaseCatalog.scala:178)
    at org.apache.spark.sql.hbase.HBaseSource.createRelation(HBaseRelation.scala:79)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
    at Main.main(Main.java:47)
```
What does this mean? Do we have to provide a new table name alias every time we execute our program?
The program is given below; any advice is much appreciated.
```java
public static void main(String[] args) throws JobExecutionException {
    JavaSparkContext jsc = new JavaSparkContext("local[2]", "HbaseTest");
    HBaseSQLContext hbaseCtx = new HBaseSQLContext(jsc);

    Map<String, String> options = new HashMap<>();
    options.put("namespace", "");
    options.put("tableName", "ips12");
    options.put("hbaseTableName", "hbase_table");
    options.put("colsSeq", "id,Name,port,longitude,latitude,devicetype,community,performance,fault,status,ip,location,Sample");
    options.put("keyCols", "id,string");
    options.put("encodingFormat", "utf-8");
    options.put("nonKeyCols", "Name,string,dcf,Name;port,string,dcf,port;longitude,string,dcf,longitude;latitude,string,dcf,latitude;devicetype,string,dcf,deviceType;community,string,dcf,community;performance,string,dcf,performance;fault,string,dcf,fault;status,string,dcf,status;ip,string,dcf,ip;location,string,dcf,location;Sample,string,dcf,Sample");

    hbaseCtx.read().format("org.apache.spark.sql.hbase.HBaseSource").options(options).load();
    hbaseCtx.sql("select * from ips12").orderBy(new Column("Name").desc()).show();
}
```
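Since the logical-table mapping is persisted in the metadata catalog across program runs, the registration step needs to be idempotent: create the mapping only when it is absent, and just reuse it otherwise. The sketch below illustrates that pattern in plain Java; the `Catalog` class here is a self-contained stand-in for the real `HBaseCatalog` (not part of the library), used only to show the create-if-absent logic:

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentRegistration {

    // Stand-in for the persistent metadata catalog; the real
    // HBaseCatalog throws when a logical table name is reused.
    static class Catalog {
        private final Set<String> logicalTables = new HashSet<>();

        void createTable(String name) {
            if (!logicalTables.add(name)) {
                throw new IllegalStateException(
                    "The logical table: " + name + " already exists");
            }
        }

        boolean tableExists(String name) {
            return logicalTables.contains(name);
        }
    }

    // Register the mapping only when it is not already present,
    // so repeated program runs do not fail.
    static void registerIfAbsent(Catalog catalog, String name) {
        if (!catalog.tableExists(name)) {
            catalog.createTable(name);
        }
    }

    public static void main(String[] args) {
        Catalog catalog = new Catalog();
        registerIfAbsent(catalog, "ips12"); // first run: creates the mapping
        registerIfAbsent(catalog, "ips12"); // second run: no-op, no exception
        System.out.println(catalog.tableExists("ips12")); // prints "true"
    }
}
```

In the actual program this means guarding the `load()` call (or issuing a `DROP TABLE` first) rather than inventing a new table alias on every execution.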