gagolews / stringi

Fast and portable character string processing in R (with the Unicode ICU)
https://stringi.gagolewski.com/

Splitting strings into pieces of fixed length and computing q-grams #471

Open · hadley opened this issue 2 years ago

hadley commented 2 years ago

Is there any existing function that does this?

x <- c("ab", "def", "g")
split_length(x, 1)
#> list(c("a", "b"), c("d", "e", "f"), "g")

split_length(x, 2)
#> list("ab", c("de", "f"), "g")

I feel like this has to be a simple application of an existing function, but I can't figure it out.

hadley commented 2 years ago

I guess I can get pretty close with stri_sub_all():

library(stringi)

str_split_length <- function(x, n = 1) {
  # Chunk start positions, up to the length of the longest string
  max_length <- max(stri_length(x))
  idx <- seq(1, max_length, by = n)

  # Take a length-n substring at each start position; strings shorter
  # than max_length produce trailing "" entries
  stri_sub_all(x, cbind(idx, length = n))
}

x <- c("ab", "def", "g")
str_split_length(x, 1)
str_split_length(x, 2)

I'd just need to clean up the trailing "".
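
For what it's worth, dropping those afterwards is a one-liner (a base-R sketch, assuming str_split_length() above behaves as described):

lapply(str_split_length(x, 2), function(p) p[p != ""])
#> [[1]]
#> [1] "ab"
#>
#> [[2]]
#> [1] "de" "f"
#>
#> [[3]]
#> [1] "g"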

gagolews commented 2 years ago

Related idea (not yet implemented): #31

But yeah, the question is why would anyone need it? Computing q-grams maybe?

hadley commented 2 years ago

Hmmm, maybe that's a better framing? It's like str_split_ngram() where you provide the boundary (character, word, ...), q/n, and whether or not you want overlaps? Then it becomes a tool that could underlie (e.g.) https://juliasilge.github.io/tidytext/reference/unnest_ngrams.html

I was thinking of it mostly as a complement to tidyr::separate(df, x, sep = c(1, 5, 10)) (sketched below) — that splits a string up into a fixed number of pieces that go into columns. What's the parallel if you don't know how many pieces there are, and hence want the result to end up in rows? My current motivation is filling a hole in a 2d matrix of functions; I'm not sure if this arises frequently in practice.
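
For reference, the columns version looks like this (a sketch; the into names are made up):

library(tidyr)

df <- data.frame(x = "abcdefghijkl")
# Numeric sep gives character positions to split at
separate(df, x, into = c("p1", "p2", "p3", "p4"), sep = c(1, 5, 10))
#>   p1   p2    p3 p4
#> 1  a bcde fghij kl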

gagolews commented 2 years ago

There's also this:

stringi::stri_split_boundaries(c("ab", "def", "g"), type="character")
[[1]]
[1] "a" "b"

[[2]]
[1] "d" "e" "f"

[[3]]
[1] "g"

which splits each string into grapheme clusters.

hadley commented 2 years ago

Yeah, I think I'll start with that, then paste together into n-grams. Not super efficient but easy to implement and then we can see if it's actually useful.
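
A rough sketch of that approach (str_ngrams() is a made-up name, not an existing stringi or stringr function):

library(stringi)

str_ngrams <- function(x, n = 2) {
  chars <- stri_split_boundaries(x, type = "character")
  lapply(chars, function(ch) {
    if (length(ch) < n) return(character(0))
    starts <- seq_len(length(ch) - n + 1)
    # paste together overlapping windows of n grapheme clusters
    vapply(starts, function(i) stri_c(ch[i:(i + n - 1)], collapse = ""),
           character(1))
  })
}

str_ngrams(c("ab", "def", "g"), 2)
#> [[1]]
#> [1] "ab"
#>
#> [[2]]
#> [1] "de" "ef"
#>
#> [[3]]
#> character(0)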

gagolews commented 2 years ago

I might implement both here (the overlapping and non-overlapping splits), but not today :)

mikmart commented 2 years ago

@gagolews Re: why would anyone need it? It's a pretty niche use, but I actually needed exactly this the other day.

I had some data collected from two sources that included MAC addresses as strings. However, one source included colon separators ("01:23:45:67:89:AB") and another had removed them ("0123456789AB"). I wanted to harmonize the format to include the separators, which then meant I needed to split the strings from the latter source into chunks of length 2.

There's a host of solutions for the length-1 vector case in this StackOverflow question. I ended up lapply()ing one of them (see below), but a purpose-built function would have been a great help (there was a decent amount of data, so looping in R was slow).

str_chunk <- function(x, n) {
  # Chunk a single string into pieces of n characters; substring()
  # clamps `to` at nchar(x), so a shorter final chunk is kept too
  from <- seq(1, nchar(x), by = n)
  substring(x, from, from + n - 1)
}

fix_mac <- function(x) {
  # Chunk each address into pairs, then rejoin with colons
  sapply(lapply(x, str_chunk, 2), paste, collapse = ":")
}

fix_mac(c("0123456789AB", "0123456789AB"))
#> [1] "01:23:45:67:89:AB" "01:23:45:67:89:AB"

gagolews commented 2 years ago

Yep, good point. Plus, I guess it'd be nice to have an option for handling chunks of different lengths (e.g., first 2 code points, then 3, then 1, etc.).
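
Something like this, perhaps (a base-R sketch; str_chunk_varying() is a hypothetical name):

str_chunk_varying <- function(x, lengths) {
  # Convert chunk lengths into (from, to) character positions
  to <- cumsum(lengths)
  from <- c(1, head(to, -1) + 1)
  lapply(x, substring, from, to)
}

str_chunk_varying("abcdef", c(2, 3, 1))
#> [[1]]
#> [1] "ab"  "cde" "f"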

mikmart commented 2 years ago

Yeah, that would be useful! E.g., a similar "reconstruction" case with UUIDs would use chunks of 8, 4, 4, 4, and 12.
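
With the sketch above, that case might look like this (the example UUID is made up):

uuid <- "123e4567e89b12d3a456426614174000"
paste(str_chunk_varying(uuid, c(8, 4, 4, 4, 12))[[1]], collapse = "-")
#> [1] "123e4567-e89b-12d3-a456-426614174000"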