Closed: JosiahParry closed this pull request 9 months ago
Finally got time at a computer today. Will check it out (with gh)
gh pr checkout 26
As an aside, I'm strongly opposed to shipping software that imports dplyr, due to the frequency of breaking changes as well as its performance. We can address this later once we've figured it out.
Fair, I've been burned by this more than once. I think {dplyr} is finally stable (although you never know), but my main reason for not wanting to import it is its size and number of additional dependencies.
Results from the example I'm looking at, looking good:
library(sf)
x = read_sf("data-raw/geojson/princes_street_minimal_x_1.geojson") |>
sf::st_transform(27700)
y = read_sf("data-raw/geojson/princes_street_minimal.geojson") |>
sf::st_transform(27700)
matches = rnetmatch::rnet_match(x, y, dist_tolerance = 10, angle_tolerance = 5)
y_aggregated = rnetmatch::rnet_aggregate(x, y, matches, value)
# rename the index column so it matches x's id column for the join
y_aggregated$id = y_aggregated$i
y_joined = dplyr::left_join(x, y_aggregated, by = "id")
plot(y["value"], lwd = 5)
plot(y_joined["value"], lwd = 5)
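For reference, the join step relies on x having an `id` column that matches the renamed `i` index. Here is a minimal base-R sketch of that same logic with made-up column values (not data from this PR):

```r
# Hypothetical stand-ins for x and the aggregation result.
x_df <- data.frame(id = 1:3)
y_aggregated <- data.frame(i = c(1, 3), value = c(55, 40))

# Rename the index column so it matches x's join key, then merge;
# all.x = TRUE keeps unmatched x rows (like dplyr::left_join()).
y_aggregated$id <- y_aggregated$i
y_joined <- merge(x_df, y_aggregated[c("id", "value")], by = "id", all.x = TRUE)
y_joined
```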
Original y:
And y_joined:
All on the same map:
Only suggestion would be: remove the confusing units.
@Robinlovelace, do you think non-standard eval is the way to go here, via ...? I think it feels pretty okay. The alternative would be to provide a vector of column names. I'm open to both.
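To make the trade-off concrete, here is a base-R sketch of the two interface styles under discussion; the function names and bodies are illustrative only, not the rnetmatch API:

```r
# NSE via ...: callers pass bare column names, captured with substitute().
aggregate_nse <- function(df, ...) {
  cols <- vapply(as.list(substitute(list(...)))[-1], deparse, character(1))
  colSums(df[cols])
}

# Explicit interface: callers pass a character vector of column names.
aggregate_chr <- function(df, cols) {
  colSums(df[cols])
}

df <- data.frame(value = c(1, 2, 3), other = c(10, 20, 30))
aggregate_nse(df, value)    # bare column name
aggregate_chr(df, "value")  # quoted column name
```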
NSE is fine for now, in going fast and breaking stuff mode. Will merge.
This PR begins to address #22. It creates a user interface for weighted aggregation of numeric variables. It uses dplyr, which I am comfortable with for development.
Please review the draft. This performs a shared-length weighted aggregation of variables in y. It does not join the results back onto x. Doing so would require copying the data, which can be expensive, and we don't know how big x might be.
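For readers of the PR, a base-R sketch of one plausible reading of "shared-length weighted aggregation" (a weighted mean); the column names and numbers are invented for illustration, and this is not the actual rnet_aggregate() implementation:

```r
# matches: one row per (i, j) candidate pair, where i indexes x,
# j indexes y, and shared_len is the length of overlap between them.
matches <- data.frame(
  i = c(1, 1, 2),
  j = c(10, 11, 11),
  shared_len = c(5, 15, 8)
)
y_value <- c(`10` = 100, `11` = 40)  # a numeric attribute on y

# For each x feature, take the shared-length weighted mean of y's value.
agg <- tapply(seq_len(nrow(matches)), matches$i, function(rows) {
  w <- matches$shared_len[rows]
  v <- y_value[as.character(matches$j[rows])]
  sum(w * v) / sum(w)
})
agg
```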