martinfleis closed this 4 months ago
I'd say this is an "enhancement" due to the performance improvement.
Thinking about this more, I think we should use the same logic in the `weights` and `neighbors` properties, which are unbearably slow for large graphs. Hard to catch these things when developing on small graphs...
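For context, a sketch of the kind of speedup being discussed. The adjacency layout and names below are assumptions for illustration, not the actual libpysal internals: the idea is to derive all per-node neighbor lists from a long-format adjacency `Series` in a single groupby pass, instead of filtering the `Series` once per focal node.

```python
import pandas as pd

# Hypothetical adjacency in long format: a (focal, neighbor) MultiIndex
# mapping to edge weights, as used here purely for illustration.
adjacency = pd.Series(
    [1.0, 1.0, 0.5, 0.5],
    index=pd.MultiIndex.from_tuples(
        [("a", "b"), ("b", "a"), ("b", "c"), ("c", "b")],
        names=["focal", "neighbor"],
    ),
)


def neighbors_slow(adj):
    # One .loc lookup per focal node -- a separate pass over the
    # index for each node, which is what hurts on large graphs.
    return {
        focal: list(adj.loc[focal].index)
        for focal in adj.index.get_level_values("focal").unique()
    }


def neighbors_fast(adj):
    # A single groupby pass over the whole Series.
    return (
        adj.index.to_frame(index=False)
        .groupby("focal")["neighbor"]
        .agg(list)
        .to_dict()
    )
```

Both return the same `{focal: [neighbors]}` mapping; the groupby version just does it in one pass.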
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 85.0%. Comparing base (018f1e2) to head (bcabdbc). Report is 4 commits behind head on main.
I got a question about the performance of the weights builders in the workshop yesterday, and I had to proudly boast that they're the fastest you can find... 'Martin uses these routinely on datasets in the millions' :D
Do we know what's causing the failures on macOS & ubuntu-dev?
The mac test looks like it's probably a fluke. The ubuntu stuff is all over the place.
Builders are fast, but some compatibility bits seem to be worse :). I'm noticing it only now that I regularly use Graph on large data. There's a reason it is still experimental :)
This looks fine to me! I think we may want to consider some benchmarks with asv for these kinds of things... I think that construction, serialization, conversion, lag, and standardization are the big targets?
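If asv were adopted, the targets listed above could look roughly like the sketch below. asv discovers plain classes whose `time_*` methods it times after calling `setup`, so no asv import appears in the benchmark file itself. The pandas operations here are hypothetical stand-ins for the real Graph calls, covering two of the named targets (standardization and lag) under an assumed long-format adjacency.

```python
import numpy as np
import pandas as pd


class GraphOps:
    # asv calls setup() before timing each time_* method; the data
    # built here is a synthetic stand-in for a real Graph adjacency.
    def setup(self):
        rng = np.random.default_rng(0)
        n, k = 10_000, 8
        focal = np.repeat(np.arange(n), k)
        neighbor = rng.integers(0, n, size=n * k)
        self.adjacency = pd.Series(
            np.ones(n * k),
            index=pd.MultiIndex.from_arrays(
                [focal, neighbor], names=["focal", "neighbor"]
            ),
        )
        self.y = rng.random(n)

    def time_standardization(self):
        # Row-standardize: divide each weight by its focal node's sum.
        return self.adjacency / self.adjacency.groupby(
            level="focal"
        ).transform("sum")

    def time_lag(self):
        # Spatial lag: weighted sum of neighbor values per focal node.
        w = self.adjacency
        return (
            w * self.y[w.index.get_level_values("neighbor")]
        ).groupby(level="focal").sum()
```

Construction, serialization, and conversion benchmarks would slot in as further `time_*` methods on the same class.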
Looks like all failures will be fixed by #692. Going ahead with that merge now to see if we can get this green.
All green (for now; the macOS failure just before had to do with a connection issue fetching geodatasets).
Not sure if we wanted to see about implementing @ljwolf's idea for asv benchmarking here, or if that's something for the future. Thinking it's probably for the future, but I'll wait on merging until confirmed.
Not going to do asv here. I'm generally a bit skeptical about it, given we have it in geopandas and no one is running it or anything.
Closes #672
For adjacency of 1,404,080 edges, the conversion time is 42.7s on main and 4.5s in this PR.
@jGaboardi not sure how to properly label this...