-
```
library(sparklyr)
library(dplyr)
spark_disconnect(sc)
setwd(paste0(Sys.getenv("SPARK_HOME"),"/"))
sc <- spark_connect(master = "local")
stream_read_kafka(sc) %>% stream_write_text(path = "file3")
```
Running the above code, I want to consume a Kafka…
DSoot updated 3 years ago
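A minimal sketch of what such a Kafka-to-text pipeline might look like; the connector coordinates, broker address, and topic name below are placeholders, not values from the original question:
```
library(sparklyr)
library(dplyr)

# The Spark-Kafka connector has to be on the classpath; the exact artifact
# coordinates depend on the Spark/Scala version (shown here for Spark 2.4).
config <- spark_config()
config$sparklyr.shell.packages <- "org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.4"

sc <- spark_connect(master = "local", config = config)

# Broker address and topic name are placeholders.
stream_read_kafka(
  sc,
  options = list(
    kafka.bootstrap.servers = "localhost:9092",
    subscribe = "mytopic"
  )
) %>%
  # Kafka delivers binary key/value columns; cast the value to a string
  # before writing it out as text files.
  transmute(value = as.character(value)) %>%
  stream_write_text(path = "file3")
```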
-
sparklyr creates a folder named "spark-warehouse" in the working directory when reading CSV files, for instance. Is there any way to automatically delete the spark-warehouse folder after the reading pro…
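A hedged sketch of two possible workarounds (the CSV path and table name are placeholders): redirect `spark.sql.warehouse.dir` to a throwaway location, or delete the folder once the read is done.
```
library(sparklyr)

# Option 1: point the warehouse at a temporary directory so it never lands
# in the project working directory.
config <- spark_config()
config$spark.sql.warehouse.dir <- file.path(tempdir(), "spark-warehouse")

sc <- spark_connect(master = "local", config = config)
flights <- spark_read_csv(sc, name = "flights", path = "flights.csv")

# Option 2: delete the folder explicitly once the read has finished.
unlink("spark-warehouse", recursive = TRUE)
```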
-
We have a function inside our customer library, written in native R, which basically checks, combines, and merges a couple of native data.tables. We tried to execute this function on a Spark cluster with `spar…
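A minimal sketch of how such a function could be pushed to the cluster with `spark_apply()`; the `combine_tables()` function, the `id` column, and the sample data are stand-ins for the internal library code:
```
library(sparklyr)

sc <- spark_connect(master = "yarn")

# Hypothetical stand-in for the custom merge function; the real one lives
# in the team's internal package.
combine_tables <- function(df) {
  dt <- data.table::as.data.table(df)
  dt[, row_count := .N, by = id]   # assumes an `id` column exists
  as.data.frame(dt)
}

sdf <- sdf_copy_to(sc, data.frame(id = c(1, 1, 2), x = 1:3),
                   name = "demo_tbl", overwrite = TRUE)

# spark_apply() runs the R function on each partition of the Spark data
# frame; the workers need R available, and the `packages` argument asks
# sparklyr to ship data.table to them.
result <- spark_apply(sdf, combine_tables, packages = "data.table")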
-
`spark_read_table(sc,"db_name.table_name") `throws an AnalysisException (below)
`tbl_change_db(sc,"db_name"); spark_read_table(sc,"table_name")` succeeds.
The table was written with `spark_write…
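One workaround sketch, assuming the goal is just to reach a table in a non-default database without the failing qualified call: qualify the name through `dbplyr::in_schema()`, or issue the SQL directly.
```
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "yarn")

# Reference the table through an explicit schema qualifier instead of
# relying on spark_read_table() parsing the "db.table" string.
tbl_qualified <- tbl(sc, dbplyr::in_schema("db_name", "table_name"))

# Or run the qualified query directly and get a Spark data frame back.
sdf <- sdf_sql(sc, "SELECT * FROM db_name.table_name")
```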
-
Hi, do you have instructions for installing Sedona for R use on AWS EMR?
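Not EMR-specific documentation, but a sketch of the usual sparklyr pattern with the `apache.sedona` CRAN package; the assumptions here are that loading the package before connecting registers the Sedona jars and SQL functions, and that EMR's Spark runs on YARN:
```
# On the EMR master node (assumption: R and sparklyr are already installed).
install.packages("apache.sedona")

library(sparklyr)
library(apache.sedona)   # load the extension before connecting

# EMR clusters normally run Spark on YARN.
sc <- spark_connect(master = "yarn")

# Smoke test, assuming the extension has registered Sedona's SQL functions.
DBI::dbGetQuery(sc, "SELECT ST_AsText(ST_Point(1.0, 2.0)) AS wkt")
```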
-
We have a bare-metal `k8s` cluster on a remote server. We tried to use `spark_connect` to connect to the cluster, but in vain.
The `spark-r:sparklyr` image was successfully built and pushed to a private r…
bmkor updated 5 years ago
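For reference, a hedged sketch of what a Kubernetes connection attempt typically looks like from sparklyr; the API server URL, namespace, and image reference are placeholders for the private-registry image mentioned above:
```
library(sparklyr)

# API server address, namespace, and image name are placeholders.
config <- spark_config()
config$spark.kubernetes.container.image <- "registry.example.com/spark-r:sparklyr"
config$spark.kubernetes.namespace <- "spark"
config$spark.executor.instances <- 2

sc <- spark_connect(
  master = "k8s://https://kube-apiserver.example.com:6443",
  config = config,
  spark_home = Sys.getenv("SPARK_HOME")
)
```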
-
ONNX is a general model format supported by most deep learning frameworks, including PyTorch, MXNet, CNTK, Keras, and so on. We wish `sparklyr` could support deep learning model inference in a more user-…
-
Hello,
I often use sparklyr and spark_apply() within the Databricks Platform on AWS. I'm attempting to move some code from Databricks Runtime 4.1 (with sparklyr 0.8.4 installed manually) to Databr…
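A minimal sketch of the kind of smoke test that helps isolate runtime-vs-sparklyr issues when moving between Databricks runtimes; it assumes the documented `method = "databricks"` connection from an R notebook on the cluster:
```
library(sparklyr)
library(dplyr)

# Inside a Databricks R notebook, the cluster connection comes from the
# "databricks" method rather than a master URL.
sc <- spark_connect(method = "databricks")

sdf <- sdf_copy_to(sc, iris, overwrite = TRUE)

# Minimal spark_apply() call to confirm the R workers are reachable on the
# newer runtime before porting the real code.
result <- spark_apply(sdf, function(df) data.frame(n = nrow(df)))
collect(result)
```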
-
I was trying to use the spark_apply function on a Spark dataframe in sparklyr. The Spark instance I am using is version 1.6.0. When I use the same function on my local machine, it executes absolutely f…
-
Is there a plan on the roadmap to include the GraphFrame API in SparkR as well?
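The question is about SparkR, but on the sparklyr side the separate `graphframes` package already wraps GraphFrames; a small sketch, where the toy vertex/edge data is made up and follows the GraphFrames `id`/`src`/`dst` column convention:
```
library(sparklyr)
library(dplyr)
library(graphframes)

sc <- spark_connect(master = "local")

# Vertices need an `id` column; edges need `src` and `dst` columns.
vertices <- sdf_copy_to(sc, data.frame(id = c("a", "b", "c")),
                        name = "vertices", overwrite = TRUE)
edges <- sdf_copy_to(sc, data.frame(src = c("a", "b"), dst = c("b", "c")),
                     name = "edges", overwrite = TRUE)

g <- gf_graphframe(vertices, edges)

# Vertex degrees come back as a Spark data frame.
gf_degrees(g) %>% collect()
```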