hope-data-science opened this issue 1 year ago
Hi @hope-data-science, thanks for sharing your ideas!
And yes, you are quite right: the ability to apply a row filter (and an ordering) during file reads is definitely the next step needed for fst.
If we have filtering and ordering we could do:
```r
library(dplyr)
library(fst)

nr_of_rows <- 1000

# generate sample fst file
tibble(
  A = 1:nr_of_rows,
  B = LETTERS[sample(1:26, nr_of_rows, replace = TRUE)]
) |>
  write_fst("x.fst")

# filter and order during read
fst("x.fst") |>
  filter(A %% 2 == 0) |>
  arrange(B)
```
In the last steps, the `filter()` and `arrange()` calls would not be evaluated in memory but registered against the on-disk file, so that only the selected rows ever need to be read. In a future implementation, this mechanism could be extended with an on-disk sorting algorithm to allow sorting of larger-than-memory datasets and to keep the data on disk as long as possible. Only when `collect()` is called would the data actually be fully read.
To make this happen, it's probably best to first add a full dplyr implementation to the fst package (so that the code above will actually run) and then implement the required back-end code in fstlib.
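A dplyr back-end along those lines could look roughly like this sketch. Everything here is an assumption for illustration: the `fst_table` class, its `filters` and `order_by` fields, and the idea of deferring work until `collect()` are not part of fst's current API.

```r
library(dplyr)
library(rlang)

# hypothetical lazy proxy class: records the file path and pending verbs
# instead of reading any data
fst_table <- function(path) {
  structure(list(path = path, filters = list(), order_by = NULL),
            class = "fst_table")
}

# filter() just stores the captured expressions on the proxy
filter.fst_table <- function(.data, ...) {
  .data$filters <- c(.data$filters, enquos(...))
  .data
}

# arrange() likewise only records the ordering
arrange.fst_table <- function(.data, ...) {
  .data$order_by <- enquos(...)
  .data
}

# only collect() touches the file; here it naively reads everything,
# whereas the real back-end would push the work down into fstlib
collect.fst_table <- function(x, ...) {
  df <- fst::read_fst(x$path)
  for (f in x$filters) df <- filter(df, !!f)
  if (!is.null(x$order_by)) df <- arrange(df, !!!x$order_by)
  df
}
```

The interesting engineering would of course happen inside `collect()`: instead of `read_fst()` on the whole file, the recorded predicates would be translated into selective reads in fstlib.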
The `from` and `to` parameters in `read_fst` are great. I wonder if we can use a filter to select only the rows we need. If this could be done on disk, it would be wonderful and would make fst a light and convenient database system. I think this feature could boost fst to a new level. Any ideas on how to promote it? (I don't know the underlying tools, so I have no idea whether it would be easy to implement or not. I just imagine that if we could use the filter columns to retrieve the needed row ids and extract them using `from` and `to`, it might be efficient.) Thanks.