blairj09 opened 11 months ago
This will require an entirely new DBI back-end for pysparklyr objects, not something I'd like to start this close to release time.
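For context, here is a minimal sketch of what one such back-end method could look like, simply wrapping the copy_to()/spark_write_table() workaround shown below. The connection class name pysparklyr_connection is hypothetical (the real class in pysparklyr may differ), and a complete back-end would need to implement the full set of DBI generics, not just this one:

library(DBI)
library(sparklyr)
library(methods)

# Hypothetical S4 class standing in for the real pysparklyr connection class
setClass("pysparklyr_connection", contains = "DBIConnection")

setMethod(
  "dbWriteTable",
  signature("pysparklyr_connection", "character", "data.frame"),
  function(conn, name, value, overwrite = FALSE, ...) {
    # Stage the local data frame in Spark, then persist it as a table
    tmp <- copy_to(conn, value, name = paste0(name, "_staging"), overwrite = TRUE)
    spark_write_table(tmp, name = name, mode = if (overwrite) "overwrite" else "error")
    invisible(TRUE)
  }
)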
This request is coming up for users at a customer. Since so many of them are used to dbWriteTable, supporting it would really help them onboard onto clusters from Workbench.
In the meantime, I suggested they do something like:
# assumes sc is an existing sparklyr connection to Databricks
library(sparklyr)

random_df <- tibble::tibble("A" = rep(1, 5), "B" = rep(1, 5))
spark_tbl_random_df <- copy_to(sc, random_df, "spark_random_df")
spark_tbl_random_df %>%
  spark_write_table(
    name = I("demo.default.random_df"),
    mode = "overwrite"
  )
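Here, I() marks the name as a fully qualified catalog.schema.table identifier so it is not escaped as a single table name. Once a DBI back-end exists, the equivalent call users expect, based on DBI's standard generic (it does not work against these connections today), would be roughly:

DBI::dbWriteTable(sc, "random_df", random_df, overwrite = TRUE)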
Currently, when trying to write local data to Databricks, the following is observed: