Instead of the iterative version of .decimate.core used here, why not just sequence along the edge? This approach is faster in my benchmarks at small precision values: its runtime is more or less stable with respect to precision, and very consistent across precision levels, whereas the iterative .decimate.core showed marked increases in computation time for precision < 0.1.
It is slower than the iterative version at moderately large precision values (relative to the initial distance), though, which is why I wanted to ping you before including it.
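For reference, here's a rough sketch of the kind of timing comparison I mean (the microbenchmark package is one way to run it; .decimate.core.iter is a hypothetical name standing in for the current iterative version, with the sequencing version below bound as .decimate.core):

library(microbenchmark)

# hypothetical setup: .decimate.core.iter = current iterative version,
# .decimate.core = the sequencing version proposed below
x <- rbind(c(0, 0), c(1, 1))  # a single edge from (0, 0) to (1, 1)
for (pr in c(1, 0.1, 0.01)) {
  print(microbenchmark(
    iterative = .decimate.core.iter(x, pr),
    sequenced = .decimate.core(x, pr),
    times = 100L
  ))
}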
Here's what I came up with this morning:
.decimate.core <- function(x, pr) {
  # x: 2x2 matrix whose rows are the two endpoints of an edge;
  # .lengths(x) returns the length of that edge
  if (.lengths(x) < pr) {
    x
  } else {
    # fewest equal-length pieces that puts each piece below pr
    nn <- ceiling(.lengths(x) / pr)
    # nn + 1 evenly spaced points per coordinate; length.out avoids a
    # zero 'by' when an edge is axis-aligned
    cbind(seq(x[1, 1], x[2, 1], length.out = nn + 1),
          seq(x[1, 2], x[2, 2], length.out = nn + 1))
  }
}
This may be able to be sped up. The idea is to divide the line between the pair of points into just enough equal-length segments to bring the proximal distances below pr.
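For instance, with a stand-in for .lengths() (assuming it returns the Euclidean edge length):

.lengths <- function(x) sqrt(sum((x[2, ] - x[1, ])^2))  # stand-in helper

x <- rbind(c(0, 0), c(3, 4))   # edge of length 5
y <- .decimate.core(x, 0.75)   # ceiling(5 / 0.75) = 7 segments, 8 points
max(sqrt(rowSums(diff(y)^2)))  # 5/7 ~ 0.714, just below pr = 0.75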
One more thing: I noticed you declared nr but then didn't use it.
Let me know what you think.