meggart / DiskArrays.jl


`map` on a diskarray is very very slow, compared to a regular array #199

Open asinghvi17 opened 1 week ago

asinghvi17 commented 1 week ago

julia> data = [i+j for i in 1:200, j in 1:100]
200×100 Matrix{Int64}:
[...]

julia> da = ChunkedDiskArray(data, chunksize=(10,10))
200×100 ChunkedDiskArray{Int64, 2, Matrix{Int64}}

Chunked: (
    [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
    [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
)

julia> using Chairmarks

julia> @be map($(x -> x * 5.0), $data) seconds=1
Benchmark: 33264 samples with 2 evaluations
 min    2.313 μs (3 allocs: 156.328 KiB)
 median 11.354 μs (3 allocs: 156.328 KiB)
 mean   13.788 μs (3 allocs: 156.328 KiB, 1.47% gc time)
 max    8.403 ms (3 allocs: 156.328 KiB, 99.68% gc time)

julia> @be map($(x -> x * 5.0), $da) seconds=1
Benchmark: 2595 samples with 1 evaluation
 min    278.083 μs (22409 allocs: 2.695 MiB)
 median 292.666 μs (22409 allocs: 2.695 MiB)
 mean   361.557 μs (22409 allocs: 2.695 MiB, 6.40% gc time)
 max    17.373 ms (22409 allocs: 2.695 MiB, 97.65% gc time)

julia> da = UnchunkedDiskArray(data)
200×100 UnchunkedDiskArray{Int64, 2, Matrix{Int64}}

Unchunked

julia> @be map($(x -> x * 5.0), $da) seconds=1
Benchmark: 3189 samples with 1 evaluation
 min    234.916 μs (20039 allocs: 2.596 MiB)
 median 248.333 μs (20039 allocs: 2.596 MiB)
 mean   292.711 μs (20039 allocs: 2.596 MiB, 6.61% gc time)
 max    1.075 ms (20039 allocs: 2.596 MiB, 73.20% gc time)

It looks like DiskGenerator is not looping over chunks at all, but rather performing random access. Should we make it loop over chunks, perhaps by making it stateful and letting it keep the current chunk "in memory"? Not sure what the best solution is here... but there must be something better than a two-order-of-magnitude slowdown...
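For reference, the chunk-wise strategy could be sketched roughly like this. This is a hypothetical helper, not the DiskArrays.jl internals; `Iterators.partition` over the axes stands in for the package's chunk iteration, and the point is simply that each block is read once rather than once per element:

```julia
# Hypothetical sketch of chunk-wise mapping: materialize one chunk at a
# time, apply `f` to the in-memory block, and write it into the output.
function chunkwise_map(f, data, chunksize)
    out = similar(data, typeof(f(first(data))))
    for jr in Iterators.partition(axes(data, 2), chunksize[2]),
        ir in Iterators.partition(axes(data, 1), chunksize[1])
        block = data[ir, jr]        # one block read per chunk
        out[ir, jr] .= f.(block)
    end
    return out
end
```

On a plain `Matrix` this produces the same result as `map`, including when the array size is not a multiple of the chunk size.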

rafaqz commented 1 week ago

It should be iterating out of order? You may be mistaking bugs for intention.

But an alternative is to use CachedDiskArray first, I guess.

asinghvi17 commented 1 week ago

Iterating out of order is fine, but a readblock on every iteration is probably pushing it :D
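To put rough numbers on that (assuming one block read per element in the current path, versus one per chunk in a chunk-wise path), the 200×100 example with (10, 10) chunks differs by exactly the two orders of magnitude the benchmarks show:

```julia
# Back-of-the-envelope read counts for the 200×100 array with (10, 10) chunks.
element_reads = 200 * 100                     # one block read per element
chunk_reads   = cld(200, 10) * cld(100, 10)   # one block read per chunk
ratio         = element_reads ÷ chunk_reads   # 100× fewer block reads chunk-wise
```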

asinghvi17 commented 1 week ago

There might be some type instability as well; I'll have to profile with Cthulhu. But AccessCountDiskArray is showing the correct number of accesses via `map`, so that part should be fine...

asinghvi17 commented 1 week ago

Here's another example with collect:


julia> da = AccessCountDiskArray(data, chunksize=(10,10))
200×100 AccessCountDiskArray{Int64, 2, Matrix{Int64}, DiskArrays.ChunkRead{DiskArrays.NoStepRange}}

Chunked: (
    [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
    [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
)

julia> @be collect($data)
Benchmark: 4638 samples with 1 evaluation
 min    2.542 μs (3 allocs: 156.328 KiB)
 median 11.875 μs (3 allocs: 156.328 KiB)
 mean   17.469 μs (3 allocs: 156.328 KiB, 1.25% gc time)
 max    1.782 ms (3 allocs: 156.328 KiB, 99.11% gc time)

julia> @be collect($da)
Benchmark: 2720 samples with 1 evaluation
 min    11.291 μs (15 allocs: 312.984 KiB)
 median 17.188 μs (15 allocs: 312.984 KiB)
 mean   30.395 μs (15.00 allocs: 313.003 KiB, 2.16% gc time)
 max    1.020 ms (17 allocs: 345.016 KiB, 96.76% gc time)

julia> da.getindex_log
2871-element Vector{Any}:
 (1:200, 1:100)
 (1:200, 1:100)
 (1:200, 1:100)
 ⋮
 (1:200, 1:100)
 (1:200, 1:100)
 (1:200, 1:100)