-
see testcase in issue: https://mantis-vw.actian.com/view.php?id=8413
INTERSECT:
scala> sqlContext.sql("select col_int, col_char20, col_timestamp from vwload_reg02_unload_tbl INTERSECT select col…
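Since the non-Hive parser rejects INTERSECT, one possible workaround (a sketch only; `t1`/`t2` are placeholder table names, not the ones from the test case) is to rewrite the query as a HiveQL LEFT SEMI JOIN through a HiveContext, with SELECT DISTINCT restoring INTERSECT's duplicate elimination. Note the semantics differ for NULLs: INTERSECT matches NULL rows, an equi-join does not.

```scala
// Hypothetical rewrite of INTERSECT as a LEFT SEMI JOIN (HiveQL, so this
// needs a HiveContext rather than the basic SQLContext).
sqlContext.sql("""
  SELECT DISTINCT a.col_int, a.col_char20, a.col_timestamp
  FROM t1 a
  LEFT SEMI JOIN t2 b
    ON  a.col_int       = b.col_int
    AND a.col_char20    = b.col_char20
    AND a.col_timestamp = b.col_timestamp
""")
```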
-
SparkSQL's non-Hive SQL parser is a little janky, and the Spark team promises better support for HiveQL. We should probably target HiveQL instead, which would presumably also help us interoperate with Hive.
-
"In addition to the basic SQLContext, you can also create a HiveContext, which provides a superset of the functionality provided by the basic SQLContext. Additional features include the ability to wri…
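A minimal sketch of what the docs describe, assuming `sc` is an existing SparkContext:

```scala
// HiveContext is a superset of SQLContext: it accepts HiveQL and can read
// tables from an existing Hive metastore.
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)

// HiveQL queries run through sql() (hql() in older releases);
// "src" is a placeholder table name from the Spark docs.
hiveContext.sql("SELECT key, value FROM src")
```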
-
Some functions are mapped, but most are not. Pass-through only works if the user guesses the right function name and the function is not used as a window function. See https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark…
-
Hi,
I "flattened" an ADAM file storing chromosome 1 from the 1000 Genomes human-genome data. Then I wanted to run an SQL query on this data.
Here's what I tried:
scala> val sqlRDD2 = sqlContext.parquetFile("hdfs:/…
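For reference, the usual flow for querying a flattened Parquet file in this Spark version would look roughly like the following sketch (the path, table name, and column are placeholders, not the ones from the truncated session above):

```scala
// Load the Parquet file as a SchemaRDD/DataFrame.
val parquetRDD = sqlContext.parquetFile("hdfs://namenode:8020/data/chr1.flat.adam")

// Register it so it can be referenced from SQL
// (older Spark releases call this registerAsTable).
parquetRDD.registerTempTable("variants")

// Now plain SQL works against it.
sqlContext.sql("SELECT COUNT(*) FROM variants")
```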
-
'es.resource' = 'apache-2014.09./apache-access' or
'es.resource' = 'apache-2014.09.29,apache-2014.09.30/apache-access'
do not work correctly with 'select count(*) from test', which is HiveQL.
The count …
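For context, an `es.resource` value like the ones above would typically be declared on an external Hive table via elasticsearch-hadoop's storage handler. A sketch (the table schema here is a placeholder; only the index/type string is taken from above):

```sql
-- Hypothetical external table backed by two daily Elasticsearch indices.
CREATE EXTERNAL TABLE test (request STRING, status INT)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES ('es.resource' = 'apache-2014.09.29,apache-2014.09.30/apache-access');
```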
-
I would like to implement a **UDAF** (user-defined aggregate function) like `sum` in `SparkSQL`. How do I do it?
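Early Spark SQL releases had no public UDAF API (only Hive UDAFs via a HiveContext); from Spark 1.5 onward there is `UserDefinedAggregateFunction`. A sum-like sketch under that API (class and registration names are illustrative):

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

// A sum-like UDAF: one Long input column, one Long accumulator.
class MySum extends UserDefinedAggregateFunction {
  def inputSchema: StructType  = StructType(StructField("value", LongType) :: Nil)
  def bufferSchema: StructType = StructType(StructField("total", LongType) :: Nil)
  def dataType: DataType       = LongType
  def deterministic: Boolean   = true

  // Start each partition's accumulator at zero.
  def initialize(buffer: MutableAggregationBuffer): Unit = buffer(0) = 0L

  // Fold one input row into the accumulator, skipping NULLs like SUM does.
  def update(buffer: MutableAggregationBuffer, input: Row): Unit =
    if (!input.isNullAt(0)) buffer(0) = buffer.getLong(0) + input.getLong(0)

  // Combine partial accumulators from different partitions.
  def merge(b1: MutableAggregationBuffer, b2: Row): Unit =
    b1(0) = b1.getLong(0) + b2.getLong(0)

  def evaluate(buffer: Row): Any = buffer.getLong(0)
}

// Usage sketch:
// sqlContext.udf.register("my_sum", new MySum)
// sqlContext.sql("SELECT my_sum(col_int) FROM some_table")
```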
-
```
What steps will reproduce the problem?
1. Set up master/slave replication from MySQL to Hadoop.
2. Create a table that includes decimal data but with no precision specified.
Here's an example:…
```