-
Ideally, where possible, we should try to convert the Hive operations to ES queries, both to minimize IO and to reduce the work done on the Hive side.
-
# Overview
The following EEL usage patterns describe reading and writing data to and from various file formats and storage systems.
A typical use case is sourcing data from an **RDBMS** system li…
-
Issue reported by one of our customers (Ticket 21412):
```
new_flight %>% compute("cached_flight")
new_sample %>% sample_n(10) %>% compute("my_sample")
Error: org.apache.spark.sql.AnalysisException: …
```
-
Not sure if this should work:
```
sc
```
-
Hi,
`ft_sql_transformer` has the following prototype:
``` r
ft_sql_transformer
```
-
I have trouble accessing those "end" fields (e.g. `AlignmentRecord.end`, `variant.end`) with Spark SQL because `end` is a reserved keyword there and conflicts with the field names.
I was wondering: Is i…
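One common workaround is that Spark SQL allows reserved keywords to be escaped with backticks in queries. A minimal sketch, assuming a hypothetical `variants` table with `start` and `end` columns:

```sql
-- `end` is reserved in Spark SQL, so it must be backtick-quoted;
-- unquoted column names like start need no escaping.
SELECT start, `end`
FROM variants
WHERE `end` > start;
```

This only helps at the SQL layer; if the conflict occurs during schema inference itself, renaming or aliasing the field may still be required.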
-
Goal: ensure that Spark SQL can operate in a strict ANSI SQL mode.
SQL dialects appear in two places:
- one is the dialect of SQL that SparkSQL supports (the one we care most about)
- the second …
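For reference, more recent Spark releases (3.0+) expose a runtime flag for a stricter ANSI-compliant dialect; whether it applies here depends on the Spark version in play. A sketch:

```sql
-- Spark 3.0+: enable ANSI-compliant behavior for casts, arithmetic
-- overflow, reserved keywords, etc. (default is false).
SET spark.sql.ansi.enabled = true;

-- Under ANSI mode, an invalid cast raises an error
-- instead of silently returning NULL.
SELECT CAST('abc' AS INT);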
-
Is there support for the following HiveQL statement or similar without creating intermediate tables?
```
WITH q1 AS (SELECT key, value FROM src WHERE key = '5')
FROM q1 INSERT INTO s1 SELECT *;
```
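If the FROM-first multi-insert form is not accepted, the same statement can usually be rewritten as an ordinary `INSERT ... SELECT` with the CTE body inlined as a derived table, still without creating an intermediate table. A sketch against the same `src`/`s1` tables:

```sql
-- Inline the CTE body as a subquery; no intermediate table is materialized.
INSERT INTO s1
SELECT *
FROM (SELECT key, value FROM src WHERE key = '5') q1;
```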
-
- This equates to the _Parquet_ **org.apache.parquet.schema.OriginalType.MAP** nested type - see https://github.com/apache/parquet-format/blob/master/LogicalTypes.md
- The following example demonstra…
-
5.0H latest daily and latest Spark 1.0 jar.
sqlContext.sql("select distinct col_int, col_char20 from vwload_reg02_unload_tbl UNION DISTINCT select col_int, col_char20 from vwload_reg02_unload_tbl2")…