ljanda opened this issue 5 years ago
I agree that the H2O presentation is incomplete. But there are various other analyses, and it is a generally known fact, one that even RStudio concedes.
The issue still holds that you are hand-waving and posting unclear tables and graphs. The time differences are often negligible at the dataset sizes many people work with. Also, R has challenges with "big data" regardless of the packages used (which is why people turn to parallel computing packages, Spark with R, etc.). It is irresponsible to cherry-pick, obscure information, and post numbers and graphs without the relevant details.
Does the comparison rely on the column being in the first position?
@pmarchand1 Yes, the first example above (the one @matloff cited and that I pulled the code from) does; in the code it is "col_1". Link: https://github.com/WinVector/Examples/blob/master/dplyr/select_timing.Rmd
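For context, here is a minimal sketch of how one could check whether the timing depends on the selected column's position. This is not the code from the linked Rmd, just an illustration; it assumes the `microbenchmark` package is installed and that 10,000 columns is a reasonable stand-in for "wide".

```r
# Illustrative sketch only: compare selecting the first vs. the last column
# of a wide data frame, in base R and in dplyr.
library(dplyr)
library(microbenchmark)

n_col <- 10000
d <- as.data.frame(matrix(1, nrow = 5, ncol = n_col))
names(d) <- paste0("col_", seq_len(n_col))
last_col <- paste0("col_", n_col)

microbenchmark(
  base_first  = d[["col_1"]],
  base_last   = d[[last_col]],
  dplyr_first = select(d, col_1),
  dplyr_last  = select(d, all_of(last_col)),
  times = 10
)
```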
See my earlier comment.
Your post repeatedly states that dplyr is slower than data.table and uses qualifiers such as "much, much" slower, but you do not clearly define what you mean by "much slower" or give clear information about when it is slower. You link out to a couple of examples, but this information should be given clearly within the post, since it is a large part of your argument.
You do pull some numbers from the H2O.ai site, but your table is vague: it doesn't show which functions are being used, the size of the dataset, or that the times are in seconds, and you don't consistently pull the smallest number.
I would redo it this way: here are a couple of examples of functions applied to multiple groups within the dataset, using the largest dataset tested at the link provided above (a sketch of what these grouped calls look like follows the note below). All results are in seconds:
Dataset used: 1,000,000,000 rows x 9 columns (50GB)
function 1:
function 2:
function 3:
Note: the smaller datasets tested (10,000,000 rows x 9 columns, 0.5 GB, and 100,000,000 rows x 9 columns, 5 GB) also ran faster with data.table, but the difference was often only a few seconds.
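For readers who haven't clicked through, here is a minimal sketch of what this kind of grouped operation looks like in both packages. The actual queries and timings are defined in the H2O benchmark linked above; the column names `id` and `v1` and the data sizes here are just illustrative.

```r
# Illustrative only: a grouped mean in dplyr and in data.table on a small
# synthetic table. The real benchmark uses much larger data and several
# different grouped queries.
library(dplyr)
library(data.table)

set.seed(1)
n  <- 1e6
df <- data.frame(id = sample(1:100, n, replace = TRUE),
                 v1 = runif(n))
dt <- as.data.table(df)

# dplyr: group, then summarise
system.time(df %>% group_by(id) %>% summarise(mean_v1 = mean(v1)))

# data.table: aggregate by group in one call
system.time(dt[, .(mean_v1 = mean(v1)), by = id])
```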
I often use large-ish datasets (over a million rows, though usually just a few million, with tens of variables) and have had very limited speed issues with dplyr. It is problematic to simply state that one should use data.table because it is faster without quantifying that claim. The data.table speed vs. syntax tradeoff may not be worthwhile, especially for the many users who work at sizes where the speed difference is negligible. In some cases data.table is an excellent option (I especially love fread()), but this post doesn't demonstrate the gains clearly or emphasize when it is useful.
Finally, the graph you include after the table is fairly inscrutable (though it looks dramatic since it slopes up quickly). I would recommend rerunning it and plotting seconds rather than the confusing ratio. I went ahead and grabbed the code from the paper you cited and edited it to make the suggested graph; see below. I'd also like to point out that this example is odd/niche, since it involves selecting a variable out of an increasing number of variables, up to roughly 100,000 variables (always with 5 rows), and it is rare to work with such a large number of variables. The time for dplyr::select() only exceeds one second at 10,000 variables, which would not matter in many cases.
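Roughly, the adaptation I mean looks like the sketch below. This is not the exact code I ran or the code from the linked Rmd; it assumes the same setup of 5 rows and a growing number of columns, times dplyr::select() against base extraction, and plots elapsed seconds directly (tidyr and ggplot2 assumed installed).

```r
# Sketch: time dplyr::select() and base [[ ]] extraction as the number of
# columns grows, then plot raw seconds instead of a ratio.
library(dplyr)
library(ggplot2)

time_select <- function(n_col) {
  d <- as.data.frame(matrix(1, nrow = 5, ncol = n_col))
  names(d) <- paste0("col_", seq_len(n_col))
  data.frame(
    n_col = n_col,
    base  = unname(system.time(d[["col_1"]])["elapsed"]),
    dplyr = unname(system.time(select(d, col_1))["elapsed"])
  )
}

sizes <- c(10, 100, 1000, 10000, 100000)
res   <- do.call(rbind, lapply(sizes, time_select))

res_long <- tidyr::pivot_longer(res, c("base", "dplyr"),
                                names_to = "method", values_to = "seconds")

ggplot(res_long, aes(n_col, seconds, colour = method)) +
  geom_line() +
  scale_x_log10() +
  labs(x = "number of columns", y = "time (seconds)")
```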
If you adjust this example to increase the number of rows instead of columns, and bump the maximum up to over ten million rows, which is a much more realistic scenario, the speed gain from base R is negligible, even at 10 million rows:
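A sketch of that row-based variant is below. Again, this is illustrative rather than the exact code behind the graph; fixing the width at 10 columns is an arbitrary choice of mine.

```r
# Sketch: same comparison, but scaling rows instead of columns.
library(dplyr)

time_select_rows <- function(n_row, n_col = 10) {
  d <- as.data.frame(matrix(1, nrow = n_row, ncol = n_col))
  names(d) <- paste0("col_", seq_len(n_col))
  data.frame(
    n_row = n_row,
    base  = unname(system.time(d[["col_1"]])["elapsed"]),
    dplyr = unname(system.time(select(d, col_1))["elapsed"])
  )
}

sizes <- c(1e4, 1e5, 1e6, 1e7)
do.call(rbind, lapply(sizes, time_select_rows))
```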