Note that most of the other heavy allocations here are addressed by #27 and #30.
Benchmarks show a modest speedup (-16% median time) and allocation reduction (-9.2% memory, -16.6% allocation count).
Main:
```
julia> @benchmark Chess.PGN.gamefromstring(pgn)
BenchmarkTools.Trial: 8865 samples with 1 evaluation.
 Range (min … max):  522.171 μs … 3.842 ms   ┊ GC (min … max): 0.00% … 85.66%
 Time  (median):     530.203 μs              ┊ GC (median):    0.00%
 Time  (mean ± σ):   560.344 μs ± 255.895 μs ┊ GC (mean ± σ):  4.48% ± 8.11%

  █▆▃▁ ▁
  █████▇▆▅▅▅▆▆█▇▅▅▅▅▅▃▅▅▃▃▁▄▄▃▄▄▁▄▃▄▃▁▃▁▁▃▁▄▁▁▁▁▃▁▁▁▁▁▃▁▁▁▃▁▁▁▄ █
  522 μs          Histogram: log(frequency) by time        1.19 ms <

 Memory estimate: 441.47 KiB, allocs estimate: 3036.
```
This PR:
```
julia> @benchmark Chess.PGN.gamefromstring(pgn)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  436.290 μs … 3.350 ms   ┊ GC (min … max): 0.00% … 85.93%
 Time  (median):     445.344 μs              ┊ GC (median):    0.00%
 Time  (mean ± σ):   473.730 μs ± 250.313 μs ┊ GC (mean ± σ):  4.96% ± 7.98%

  █▂
  ▄██▄▆█▄▃▃▂▂▂▂▂▂▂▂▁▁▂▂▂▁▁▂▂▂▁▂▁▂▂▂▂▂▂▂▂▂▁▁▂▂▁▂▂▁▁▂▂▂▂▁▂▂▁▁▂▂▁▂ ▂
  436 μs          Histogram: frequency by time             658 μs <

 Memory estimate: 401.62 KiB, allocs estimate: 2532.
```
This reduces allocations in `pgn.jl` by preallocating an `IOBuffer` for `PGNReader`, so that one isn't created each time we read a symbol. For a test, I used:
The top allocations are as follows, with the item marked `>` being reduced by 80% in this PR:
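(The allocation list itself is elided above.) Breakdowns like it can be gathered with Julia's allocation profiler, `Profile.Allocs`, available since Julia 1.8. A minimal sketch with a stand-in workload, since the PR's actual test input isn't reproduced here, and not necessarily the tooling used for this PR:

```julia
using Profile

# Stand-in workload; the real measurement ran Chess.PGN.gamefromstring.
workload() = [String(rand('a':'z', 8)) for _ in 1:100]

workload()                     # warm up so compilation isn't profiled
Profile.Allocs.clear()
# sample_rate=1 records every allocation made by the call.
Profile.Allocs.@profile sample_rate=1 workload()

# Sort the captured allocations by size and print the heaviest ones.
results = Profile.Allocs.fetch()
heaviest = sort(results.allocs; by = a -> a.size, rev = true)
for a in first(heaviest, 5)
    println(a.type, ": ", a.size, " bytes")
end
```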
Personal Note: I apologize for the influx of PRs. I have been working on a project using this library, and wanted to upstream the optimizations I found during it.
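For reference, the preallocation pattern described above can be sketched as follows. The names (`TokenReader`, `readtoken!`) are illustrative only, not the actual `PGNReader` internals:

```julia
# Buffer-reuse sketch: the reader owns a single scratch IOBuffer
# that is reset before each token, instead of constructing a fresh
# IOBuffer on every read.
struct TokenReader
    io::IOBuffer        # input being parsed
    scratch::IOBuffer   # preallocated once, reused for every token
end

TokenReader(s::AbstractString) = TokenReader(IOBuffer(s), IOBuffer())

function readtoken!(r::TokenReader)
    truncate(r.scratch, 0)          # reset the scratch buffer in place
    while !eof(r.io)
        c = read(r.io, Char)
        isspace(c) && break         # stop at whitespace
        print(r.scratch, c)
    end
    return String(take!(r.scratch)) # copy the finished token out
end
```

The point is that the reader-owned buffer avoids constructing a new `IOBuffer` and its bookkeeping per symbol; `String(take!(...))` still copies the token out, which is unavoidable when a `String` is returned.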