Thanks for the encouragement. Fully agree. Yes, there are lots of efforts in this space. Note that I (and now Jan Gorecki) work at H2O. I gave a recent presentation here: https://www.youtube.com/watch?v=5X7h1rZGVs0 The slides are here: https://github.com/Rdatatable/data.table/wiki/Presentations Just checking whether you had seen these before we discuss further?
Hi @mattdowle thanks for pointing those out. Very impressive results! So would you say that H2O already enables that pipeline?
A lot depends on the details of your data and the types of feature engineering. Definitely test H2O's ML as well as MLlib. We don't have data.table syntax for either Spark or H2O yet, so the short answer is no because of that. But yes, we'd like to add it.
Fully agree, this would be great. Hope that data.table will be made available in the near future.
I really hope DT syntax is available someday. Personally, I really prefer the DT syntax and would like to use it consistently in Spark rather than cringe with dplyr. Fingers crossed...
something new?
Adding my voice to a desire to be able to use data.table instead of dplyr on distributed data in Spark/H2O
Using data.table in Spark would be AMAZING!
I ran into an issue caused by SparkR's use of rJava (and all its endless flaws) today; would love to have been able to just use data.table directly instead of the magical file I/O gymnastics I found myself resorting to.
My plus 1000
+1, any updates on the future plan for the data.table package?
I'm curious what people want out of this exactly.
Just to be able to use [ on an RDD like you would on a data.table (namely, i/j/by)?
Certainly the full functionality is a ways away, but I imagine it wouldn't be too earth-shaking to make an idiom for filtering, grouping, even joining, by sending the syntax within [] to the corresponding operations in SparkR.
In particular, this would just amount to (in essence) aliasing SparkR functions in a syntax friendlier for data.table regulars.
Is this what people have in mind?
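For concreteness, a rough sketch of the kind of aliasing described above; dt_query is a made-up helper (not part of SparkR), and it ignores most of data.table's grammar:

library(SparkR)

# rough sketch: translate the three parts of dt[i, j, by]
# into the corresponding SparkR verbs
dt_query <- function(sdf, i, by, ...) {
  sdf <- filter(sdf, i)       # i  -> SparkR::filter (row subset)
  gd  <- groupBy(sdf, by)     # by -> SparkR::groupBy
  agg(gd, ...)                # j  -> SparkR::agg (aggregations)
}

# roughly equivalent to  dt[x > 1, .(total = sum(y)), by = "g"]:
# dt_query(sdf, sdf$x > 1, "g", total = sum(sdf$y))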
No updates; there is a lot to develop around data.table itself, so external interfacing is not that high a priority now. Instead of just Spark integration, it makes more sense to integrate with dplyr, something like dplyr.table (the inverse of dtplyr). Then any dplyr backend will work with data.table syntax.
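A minimal sketch of the translation such a dplyr.table package would perform; dplyr.table itself does not exist, so the data.table form appears only in a comment:

library(dplyr)

df <- data.frame(x = 1:4, y = 1:4, g = c("a", "a", "b", "b"))

# data.table form:  dt[x > 1, .(total = sum(y)), by = g]
# would translate to the dplyr pipeline below, which any dplyr
# backend (Spark via dbplyr, duckdb, arrow, ...) can execute:
df |>
  filter(x > 1) |>
  group_by(g) |>
  summarise(total = sum(y))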
Indeed, the dplyr SQL interface is good enough.
@jangorecki The problem with that is that it will also be slow, full of bugs, and very unstable, as the dplyr interface changes on an almost daily basis and sometimes they change the whole idiom at once, like they did with lazyeval > rlang > tidyeval and G-d knows what else, as I lost track long ago. Not to mention that Hadley, who once stated (with a "tongue in cheek") that data.table uses "cryptic shortcuts", now masks a few of these shortcuts and suddenly doesn't consider them so cryptic anymore. In short, creating such an API would be a full-time job IMO.
I think migrating a few main functionalities from data.table and adding more when there is time would be much safer/easier.
@DavidArenburg agree, thus I would suggest waiting at least until dplyr 1.0 before starting any serious dev of such a dplyr.table interface.
Have you arrived at a conceivable roadmap for a spark integration project (reverse dtplyr or any other form) given that dplyr 1.0 has been released? It would be great to hear your thoughts now that some time has passed.
@jangorecki if the dplyr.table approach is key, or any backend for that matter, it seems like higher integration with data.table would be necessary. Take i, which can be several things: row numbers (bounded by nrow(dt)), joins with potential for notjoin, and NSE expressions.
While working on #4585, I also worked on functions to process the isub to their end points, but did not implement them in the PR because, with all the variables needed to process the isub, it was not clean. However, to implement a backend, a function to process the isub would be useful so that NSE would be processed consistently. Otherwise, it would be very easy for dplyr.table's i processing to fall out of sync with data.table's processing.
That makes sense, but for that we have to write multiple new helpers that expose the internal logic for understanding input arguments, and then export them so that such a tool can easily mimic this logic. Describing our current API with helpers is not that trivial a task. See related #852.
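To make that concrete, a hypothetical shape for one such exported helper; classify_isub and its return structure are invented for illustration, and nothing like it is exported by data.table today:

# hypothetical sketch only: classify an unevaluated `i` expression the way
# [.data.table does internally, so an external backend could reuse the logic
classify_isub <- function(isub) {
  if (is.call(isub) && identical(isub[[1L]], as.name("!")))
    return(list(type = "notjoin", expr = isub[[2L]]))       # dt[!x, ...]
  if (is.call(isub) && identical(isub[[1L]], as.name("order")))
    return(list(type = "order", expr = isub))               # dt[order(x), ...]
  list(type = "where", expr = isub)                         # logical / row numbers
}

# classify_isub(quote(!other_dt))  ->  type "notjoin"
# classify_isub(quote(order(a)))   ->  type "order"
# classify_isub(quote(a > 1))      ->  type "where"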
@jangorecki I sincerely hope data.table will become a viable solution in itself and help to completely avoid the dplyr / tidyverse fluff when (and not only when!) interacting with large datasets and out-of-memory computing. So far I find data.table a marvel of clarity and efficiency, almost perfect, with excellent integration with mlr3verse, to give an example.
The proposed dplyr.table does not need to interact with data.table at all and can be a completely standalone package.
All it needs is to mimic data.table's API:
DT[ subset|order, select, groupby ]
That makes it much easier to deliver than trying to fit the translation inside [.data.table.
Then a possible usage could look like:
library(data.table)
dt = data.table(a = 1:4, b = 1:2)
library(dplyr.table)          # proposed package, does not exist yet
dp = as.dplyr.table(dt)
all.equal(
  dt[, sum(a), b],
  dp[, sum(a), b] |> as.data.table()
)
The latter, dp, could theoretically be any dplyr backend: Spark, duckdb (personally I like duckdb a lot, but the lack of a user-friendly API is still pushing me away), and so on.
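For context, this is roughly what that query looks like today through an existing dplyr backend (DuckDB via dplyr/dbplyr, both assumed installed); a dplyr.table front end would emit these verbs from the DT[, sum(a), b] form:

library(duckdb)
library(dplyr)

con <- dbConnect(duckdb::duckdb())
duckdb_register(con, "dt", data.frame(a = 1:4, b = 1:2))

# the dplyr backend executes the grouped sum inside DuckDB,
# equivalent to dt[, sum(a), b] in data.table syntax
tbl(con, "dt") |>
  group_by(b) |>
  summarise(V1 = sum(a)) |>
  collect()

dbDisconnect(con, shutdown = TRUE)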
I might be missing something (sorry: long, old thread with several hidden replies). But since we're ultimately talking about syntax masking/mimicking, wouldn't it be easier in the long run to create something like a database.table package that translates the [i, j, by] into the appropriate backend(s)? Or, perhaps easier for conversion, going through the new DT(i, j, by) functional syntax that Matt introduced not so long ago.
Having a dedicated database.table frontend package that directly controls the syntax generics is probably more aligned with the data.table way than going through d(b)plyr in the end.
(This might be Jan's point, so again apologies if I'm just quibbling over the name.)
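For reference, the experimental functional form mentioned above looked roughly like the following (it has since been pulled from the exported API, as noted further down the thread, so this is an approximation rather than current usage):

library(data.table)

# classic [.data.table form
as.data.table(mtcars)[cyl == 4, .(avg_mpg = mean(mpg)), by = gear]

# roughly equivalent experimental functional form (DT() no longer exported):
# mtcars |> DT(cyl == 4, .(avg_mpg = mean(mpg)), by = gear)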
+1 on DuckDB, although I do think their SQL API is much better than the alternatives.
Integration of various sources/targets is a lot of development and maintenance, therefore doing a single integration to dplyr, and via it gaining other backends, feels much more likely to be achieved. If we want to target only Spark, or only duckdb, then I agree it's better to translate directly rather than via dplyr.
Off topic: DT() has been pulled back from the exported API for the moment.
I agree that it would be possible and preferable to implement this in a separate package, which hopefully would get the seal of approval (#5723).
Also, based on the new governance this is out of scope -- "Functionality that is out of current scope...Manipulating out-of-memory data, e.g. data stored on disk or remote SQL DB, (as opposed e.g. to sqldf / dbplyr)" -- and the consensus seems to be that this should be implemented in another package, so I am closing this issue. (Feel free to re-open if I have misunderstood.)
Bump for the "dplyr.table (inverse of dtplyr)" idea. For me, that would be a game changer, letting me use arrow with the DT syntax. I naively added a feature request for DT syntax on the arrow GitHub page.
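Today the arrow route goes through dplyr verbs pushed down to Arrow's compute engine; the request is to be able to write the same query in DT syntax. A minimal sketch with a placeholder dataset path and column names:

library(arrow)
library(dplyr)

# dplyr verbs are translated and pushed down to Arrow's compute engine
open_dataset("path/to/parquet/") |>   # placeholder path
  filter(x > 1) |>
  group_by(g) |>
  summarise(total = sum(y)) |>
  collect()

# desired data.table-style equivalent:
# ds[x > 1, .(total = sum(y)), by = g]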
data.table is awesome, but most people don't have 100 GB of memory to handle really large data sets in memory.
Big progress has been made in making the Apache Spark framework available through R in the last couple of years. Two such projects are Apache's SparkR and RStudio's sparklyr. Both of these provide a dplyr-style interface to Spark's data processing engine.
As a heavy data.table user, it would be amazing if there were a data.table interface for Spark. That would make it incredibly easy for data scientists to migrate their projects from smaller CSV-style data sets to the huge data sets that can be processed by Spark.
A classic data pipeline for me is:
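A minimal illustrative sketch of such an in-memory pipeline (the file path and column names are placeholders, not the original snippet):

library(data.table)

# read, filter/aggregate by group, write back out -- all in memory
dt  <- fread("events.csv")
agg <- dt[status == "ok", .(total = sum(amount)), by = .(user_id)]
fwrite(agg, "summary.csv")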
I want to be able to migrate this to:
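And a hypothetical sketch of the desired Spark version: the same [i, j, by] syntax, but evaluated by Spark. sparklyr's spark_connect() and spark_read_csv() are real; the as.spark.table() wrapper is invented here to stand in for whatever interface such an integration would expose:

library(sparklyr)

sc  <- spark_connect(master = "local")
sdf <- spark_read_csv(sc, name = "events", path = "path/to/events/*.csv")

# hypothetical: wrap the Spark DataFrame so that data.table syntax is
# translated to Spark operations instead of running in local memory
st  <- as.spark.table(sdf)                                  # does not exist
agg <- st[status == "ok", .(total = sum(amount)), by = .(user_id)]

spark_write_csv(agg, "path/to/summary/")                    # hypothetical usage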