asinghvi17 opened 1 week ago
It should be iterating out of order? You may be confusing a bug for intended behavior. But an alternative is to use CachedDiskArray first, I guess (see the sketch below).
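For reference, a minimal sketch of that workaround, assuming the cache/CachedDiskArray wrapper available in recent DiskArrays versions (the maxsize keyword and its units are from memory, so double-check the docstring):

using DiskArrays

# `da` is any disk-backed array. Wrapping it in the chunk cache means
# repeated element accesses hit an in-memory copy of the chunk instead
# of issuing a readblock per access.
cached = DiskArrays.cache(da; maxsize = 1000)  # maxsize in MB (assumption)
collect(cached)  # iteration now reads each chunk from disk at most once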
Iterating out of order is fine, but a readblock on every iteration is probably pushing it :D
There might be some type instability as well; I'll have to profile with Cthulhu. But AccessCountDiskArray is showing the correct number of accesses via map, so it should be fine...
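For context, that check looks roughly like this (I'm assuming AccessCountDiskArray lives in DiskArrays' test helpers; adjust the import to wherever your version defines it):

using DiskArrays
using DiskArrays.TestTypes: AccessCountDiskArray  # assumed module path

data = rand(1:10, 200, 100)
da = AccessCountDiskArray(data, chunksize = (10, 10))

map(x -> x + 1, da)      # goes through the chunk-aware code path
length(da.getindex_log)  # expect 200 entries: one readblock per 10×10 chunk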
Here's another example with collect:
julia> da = AccessCountDiskArray(data, chunksize=(10,10))
200×100 AccessCountDiskArray{Int64, 2, Matrix{Int64}, DiskArrays.ChunkRead{DiskArrays.NoStepRange}}
Chunked: (
[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
[10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
)
julia> @be collect($data)
Benchmark: 4638 samples with 1 evaluation
min 2.542 μs (3 allocs: 156.328 KiB)
median 11.875 μs (3 allocs: 156.328 KiB)
mean 17.469 μs (3 allocs: 156.328 KiB, 1.25% gc time)
max 1.782 ms (3 allocs: 156.328 KiB, 99.11% gc time)
julia> @be collect($da)
Benchmark: 2720 samples with 1 evaluation
min 11.291 μs (15 allocs: 312.984 KiB)
median 17.188 μs (15 allocs: 312.984 KiB)
mean 30.395 μs (15.00 allocs: 313.003 KiB, 2.16% gc time)
max 1.020 ms (17 allocs: 345.016 KiB, 96.76% gc time)
julia> da.getindex_log
2871-element Vector{Any}:
(1:200, 1:100)
(1:200, 1:100)
(1:200, 1:100)
(1:200, 1:100)
⋮
(1:200, 1:100)
(1:200, 1:100)
(1:200, 1:100)
(1:200, 1:100)
It looks like DiskGenerator is not looping over chunks at all, but rather is performing random access. Should we make it loop over chunks instead? Perhaps by making it stateful and letting it keep the current chunk "in memory"? Not sure what the best solution is here, but there must be something better than a two-orders-of-magnitude slowdown.
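For illustration, here's a minimal sketch of the chunk-buffered idea (collect_chunked is a hypothetical helper, not DiskArrays API; it just demonstrates one disk read per chunk using the exported eachchunk):

using DiskArrays: eachchunk

# Hypothetical helper: traverse a disk array one chunk at a time, so each
# chunk is read from disk exactly once instead of once per element.
function collect_chunked(f, da)
    out = Array{Base.promote_op(f, eltype(da))}(undef, size(da))
    for c in eachchunk(da)      # `c` is a tuple of index ranges for one chunk
        block = da[c...]        # a single readblock pulls the whole chunk in
        out[c...] .= f.(block)  # work on the in-memory buffer
    end
    return out
end

collect_chunked(identity, da)   # one read per chunk, not per element

A stateful DiskGenerator could do the same thing lazily by holding the current block until iteration leaves its chunk.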