rafaqz closed this pull request 2 years ago
Oh, just saw the test failures are real: planargradient doesn't give the right result.
Actually, I broke `_rescale`, which is much more serious! It's tested now.
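For context, a minimal sketch of what a min-max rescaler like `_rescale` typically does. This is an illustration only, not the package's actual implementation; the function name `rescale!` here is hypothetical:

```julia
# Hypothetical sketch: rescale all values of a matrix into [0, 1] in place.
# Not the package's actual `_rescale`; it just illustrates the idea.
function rescale!(A::AbstractMatrix{<:Real})
    lo, hi = extrema(A)
    A .= (A .- lo) ./ (hi - lo)  # shift to zero, then divide by the range
    return A
end
```

A bug here is easy to miss because most downstream algorithms still "work" on an unnormalized landscape, which is why adding a test for it matters.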
Merging #59 (5e1b536) into main (301605b) will decrease coverage by 0.13%. The diff coverage is 94.59%.
@@ Coverage Diff @@
## main #59 +/- ##
==========================================
- Coverage 84.81% 84.68% -0.14%
==========================================
Files 14 14
Lines 303 320 +17
==========================================
+ Hits 257 271 +14
- Misses 46 49 +3
Flag | Coverage Δ |
---|---|
unittests | 84.68% <94.59%> (-0.14%) ↓ |

Flags with carried forward coverage won't be shown.
Impacted Files | Coverage Δ |
---|---|
src/NeutralLandscapes.jl | 100.00% <ø> (ø) |
src/classify.jl | 46.57% <ø> (ø) |
src/algorithms/nnelement.jl | 83.33% <75.00%> (-11.91%) ↓ |
src/algorithms/diamondsquare.jl | 100.00% <100.00%> (ø) |
src/algorithms/distancegradient.jl | 100.00% <100.00%> (ø) |
src/algorithms/nncluster.jl | 100.00% <100.00%> (ø) |
src/algorithms/perlinnoise.jl | 91.66% <100.00%> (-0.40%) ↓ |
src/algorithms/planargradient.jl | 100.00% <100.00%> (ø) |
src/algorithms/rectangularcluster.jl | 100.00% <100.00%> (ø) |
src/landscape.jl | 93.33% <100.00%> (ø) |
... and 1 more | |
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update 301605b...5e1b536.
Looking good!
Thanks! Is that after my last tweak to DistanceGradient?
Looks like it's caught up to Python now.
Think we can merge this, and possibly consider adding approximate NNE in a different PR.
The gains look almost imperceptible when comparing the figure to the one posted in Zulip.
Log scale?? Also, that's only half the methods, and the biggest gains aren't in there either: RectangularCluster is 10x faster, and DiamondSquare and PlanarGradient are also a few times faster. Two of these use NN and one is just rand.
I'll copy my btime here if that helps.
Benchmarks: 1.5-12x faster, except NearestNeighborCluster, which we know about, and PerlinNoise, which I already optimized last time. Mostly 2-3x faster, which is not much on a log scale.
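For anyone reproducing the numbers below: the timings assume BenchmarkTools.jl and the package itself are loaded, roughly like this (a setup sketch, not part of the original logs):

```julia
using BenchmarkTools       # provides the @btime macro used below
using NeutralLandscapes    # provides the generators (DiamondSquare, PlanarGradient, ...)

# Each benchmark generates a 100×100 landscape, e.g.:
A = rand(DiamondSquare(), 100, 100)
```

`@btime` reports the minimum time over many samples, so run-to-run noise should be small, but absolute numbers will of course vary by machine.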
# main
julia> @btime rand(DiamondSquare(), 100, 100);
1.473 ms (38236 allocations: 4.61 MiB)
# performance branch
julia> @btime rand(DiamondSquare(), 100, 100);
469.940 μs (7 allocations: 286.47 KiB)
# main
julia> @btime rand(DiscreteVoronoi(), 100, 100);
785.787 μs (20047 allocations: 1.91 MiB)
# performance branch
julia> @btime rand(DiscreteVoronoi(), 100, 100);
278.867 μs (17 allocations: 79.42 KiB)
# main
julia> @btime rand(DistanceGradient(), 100, 100);
784.722 μs (20050 allocations: 1.91 MiB)
# performance branch
julia> @btime rand(DistanceGradient(), 100, 100);
229.702 μs (22 allocations: 79.30 KiB)
# main
julia> @btime rand(DistanceGradient([1, 2]), 100, 100);
841.513 μs (20051 allocations: 1.91 MiB)
# performance branch
julia> @btime rand(DistanceGradient([1, 2]), 100, 100);
266.373 μs (24 allocations: 79.44 KiB)
# main
julia> @btime rand(EdgeGradient(), 100, 100);
87.527 μs (8 allocations: 234.55 KiB)
# performance branch
julia> @btime rand(EdgeGradient(), 100, 100);
45.450 μs (9 allocations: 235.42 KiB)
# main
julia> @btime rand(MidpointDisplacement(), 100, 100);
1.545 ms (38236 allocations: 4.28 MiB)
# performance branch
julia> @btime rand(MidpointDisplacement(), 100, 100);
419.279 μs (7 allocations: 286.47 KiB)
# main
julia> @btime rand(NearestNeighborElement(; n=5, k=2), 100, 100);
997.695 μs (20044 allocations: 2.14 MiB)
# performance branch
julia> @btime rand(NearestNeighborElement(; n=5, k=2), 100, 100);
460.346 μs (17 allocations: 79.56 KiB)
# main
julia> @btime rand(NearestNeighborCluster(; n=:queen), 100, 100);
4.768 ms (39099 allocations: 4.08 MiB)
# performance branch
julia> @btime rand(NearestNeighborCluster(; n=:queen), 100, 100);
3.993 ms (19060 allocations: 1.86 MiB)
# main
julia> @btime rand(NearestNeighborCluster(; n=:diagonal), 100, 100);
4.766 ms (39099 allocations: 4.08 MiB)
# performance branch
julia> @btime rand(NearestNeighborCluster(; n=:diagonal), 100, 100);
4.035 ms (19060 allocations: 1.86 MiB)
# main
julia> @btime rand(NearestNeighborCluster(0.2, :diagonal), 100, 100);
3.429 ms (27088 allocations: 3.09 MiB)
# performance branch
julia> @btime rand(NearestNeighborCluster(0.2, :diagonal), 100, 100);
3.032 ms (7049 allocations: 942.03 KiB)
# main
julia> @btime rand(NoGradient(), 100, 100);
60.282 μs (2 allocations: 78.17 KiB)
# performance branch
julia> @btime rand(NoGradient(), 100, 100);
34.735 μs (2 allocations: 78.17 KiB)
# main
julia> @btime rand(PerlinNoise(), 100, 100);
491.376 μs (107 allocations: 1.23 MiB)
# performance branch
julia> @btime rand(PerlinNoise(), 100, 100);
387.226 μs (109 allocations: 1.23 MiB)
# main
julia> @btime rand(PerlinNoise(; octaves=3, lacunarity=3, persistance=0.3), 100, 100);
1.601 ms (290 allocations: 1.98 MiB)
# performance branch
julia> @btime rand(PerlinNoise(; octaves=3, lacunarity=3, persistance=0.3), 100, 100);
1.259 ms (292 allocations: 1.97 MiB)
# main
julia> @btime rand(PlanarGradient(), 100, 100);
76.178 μs (3 allocations: 78.19 KiB)
# performance branch
julia> @btime rand(PlanarGradient(), 100, 100);
32.888 μs (4 allocations: 79.06 KiB)
# main
julia> @btime rand(RectangularCluster(), 100, 100);
90.598 ms (140442 allocations: 5.22 MiB)
# performance branch
julia> @btime rand(RectangularCluster(), 100, 100);
7.015 ms (137012 allocations: 5.09 MiB)
# main
julia> @btime rand(WaveSurface(), 100, 100);
189.932 μs (3 allocations: 78.19 KiB)
# performance branch
julia> @btime rand(WaveSurface(), 100, 100);
125.732 μs (4 allocations: 79.06 KiB)
# main
julia> @btime rand(WaveSurface(; periods=2), 100, 100);
192.770 μs (3 allocations: 78.19 KiB)
# performance branch
julia> @btime rand(WaveSurface(; periods=2), 100, 100);
127.976 μs (4 allocations: 79.06 KiB)
Yeah, the speed improvements are amazing :-) I guess this is ready to merge? My comment was meant in the context of the paper, where I had hoped the improvements might change the relationships qualitatively.
@rafaqz should be ready to merge, no?
There's a weird docs issue, but agreed: given the performance improvements, we should merge ASAP!
OK, now that the docs issue is fixed, this is ready to go.
This PR is a number of performance optimizations: removing allocations, improving type stability, and making better choices of method for various things.
There are probably more performance improvements possible: `label` ... `k=1` ... It doesn't help much once `n` and `k` are large. It may not be the best algorithm to use, but this PR isn't really for algorithmic changes.
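As an example of the allocation-removing style this PR is about: NeutralLandscapes also supports filling a preallocated matrix with `rand!`, so repeated generation does not allocate a fresh output array each time. A sketch, assuming the exported `rand!(mat, alg)` method:

```julia
using NeutralLandscapes
using Random: rand!

A = Matrix{Float64}(undef, 100, 100)  # allocate the output buffer once
rand!(A, PlanarGradient())            # fill it in place
rand!(A, PlanarGradient())            # reuse the same buffer; no new landscape allocated
```

The same buffer-reuse idea applies inside the algorithms themselves: hoisting temporaries out of inner loops is where most of the allocation reductions in this PR come from.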