-
Can I execute Spark SQL through a JDBC driver?
And how can I bulk-insert data with low latency?
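For the first question: Spark ships a Thrift JDBC/ODBC server that speaks the HiveServer2 wire protocol, so a plain Hive JDBC client can submit Spark SQL to it. A minimal sketch, assuming a Thrift Server on the default port 10000 and a hypothetical table `my_table`:

```scala
import java.sql.DriverManager

// Needs the Hive JDBC driver (org.apache.hive:hive-jdbc) on the classpath.
// URL, credentials and table name are illustrative, not prescriptive.
val conn = DriverManager.getConnection(
  "jdbc:hive2://localhost:10000/default", "hive", "")
val stmt = conn.createStatement()
val rs   = stmt.executeQuery("SELECT COUNT(*) FROM my_table")
while (rs.next()) println(rs.getLong(1))
rs.close(); stmt.close(); conn.close()
```

On the second question: pushing rows one `INSERT` at a time through the Thrift Server is slow; bulk loads are usually faster when you write files (e.g. Parquet) with a Spark job and register them as a table, or at least batch statements on the client side.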
-
When I run
`curl -d "" 'localhost:8090/contexts/test-context?num-cpu-cores=4&memory-per-node=512m'`
it creates a SparkContext with no problem, but when I try to create a Spark SQL context I get an error. I us…
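A commonly suggested fix, assuming spark-jobserver with the job-server-extras module on the classpath: pass a `context-factory` parameter so the server builds a SQL-capable context, e.g.
`curl -d "" 'localhost:8090/contexts/sql-context?context-factory=spark.jobserver.context.SQLContextFactory'`
The factory class name comes from the extras module and may differ between jobserver versions.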
-
Is there a way to set up the sparklyr Spark context using a pre-existing `sc` from SparkR?
-
### What kind of issue is this?
- [x] Bug report. If you’ve found a bug, please provide a code snippet or test to reproduce it below.
The easier it is to track down the bug, the faster …
-
I tried creating a DataFrame from a 60 MB Excel file using spark-hadoopoffice-ds-2.11,
but it throws `java.lang.OutOfMemoryError: GC overhead limit exceeded`.
`spark.executor.instances` was set to 3.
…
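For what it's worth, `GC overhead limit exceeded` while parsing a large `.xlsx` usually means the executor heap is too small for the in-memory representation the Excel parser builds, so the first thing to try is more executor memory. A minimal sketch with illustrative sizing (the data source name is the one documented by the HadoopOffice project; the path and option values are assumptions):

```scala
import org.apache.spark.sql.SparkSession

// spark.executor.memory must be set before executors launch,
// e.g. here on the builder or via spark-submit flags.
val spark = SparkSession.builder()
  .appName("excel-ingest")
  .config("spark.executor.instances", "3")
  .config("spark.executor.memory", "4g")   // illustrative; size to your nodes
  .getOrCreate()

val df = spark.read
  .format("org.zuinnote.spark.office.excel")
  .option("read.locale.bcp47", "en")       // locale option from the HadoopOffice docs
  .load("hdfs:///data/big.xlsx")           // hypothetical path
```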
-
Hi, I am using HadoopOffice in my Spark application. I have followed your steps and added the dependency to my pom.xml file:
<dependency>
  <groupId>com.github.zuinnote</groupId>
  <artifactId>spark-hadoopoffice-ds_2.11</artifactId>
  <version>1.0.4</version>
</dependency>
When I r…
-
Hi,
I am attempting to use this package via: https://scalapb.github.io/sparksql.html. I have tried both the `trueaccord` and `thesamet` artifacts, and I always end up with an empty jar with only `ME…
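For reference, the `trueaccord` coordinates are the legacy ScalaPB group id; current releases publish under `com.thesamet.scalapb`. A build.sbt sketch, with a placeholder version (check Maven Central for a real one):

```scala
// Sketch only: artifact name per the sparksql-scalapb docs; the version is a placeholder.
libraryDependencies += "com.thesamet.scalapb" %% "sparksql-scalapb" % "1.0.0"
```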
-
The error is below; how can I fix it? Thanks.
`Exception in thread "main" java.lang.ClassNotFoundException: Failed to find data source: org.apache.spark.aliyun.maxcompute.datasource. Please find packages at http://spark.apache.org…
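For context, this error generally means the jar that provides `org.apache.spark.aliyun.maxcompute.datasource` is not on the application classpath; the usual fix is to pass it explicitly, e.g. `spark-submit --jars /path/to/aliyun-maxcompute-connector.jar` (the jar name here is illustrative; use the artifact the project's README points to) or the equivalent `--packages` coordinates.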
-
Hi, I'm a bit confused about why using this lib can cause such an error: as I understand it, Tez is an alternative execution engine to Spark, so since this is the spark-llap lib, Tez should not be involved at all.
…
-
Spark SQL [attempts to infer](https://spark.apache.org/docs/1.1.0/sql-programming-guide.html#inferring-the-schema-using-reflection) the schema of your data using reflection. This works for case classe…
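A minimal sketch of what that reflection-based inference looks like, written against the modern `SparkSession` API rather than the 1.1.0 `SQLContext` the linked guide describes:

```scala
import org.apache.spark.sql.SparkSession

// Field names and types are derived from the case class via Scala reflection.
case class Person(name: String, age: Int)

object InferDemo extends App {
  val spark = SparkSession.builder()
    .appName("schema-inference")
    .master("local[*]")
    .getOrCreate()
  import spark.implicits._

  val df = Seq(Person("Ann", 34), Person("Bob", 29)).toDF()
  df.printSchema()  // name: string, age: integer (inferred, not declared)
  spark.stop()
}
```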