Looking over the cases in the benchmark folder, I noticed something I hadn't spotted before: not all of them are doing the same thing, and because nothing is done with the output, both `test_resumable` and `test_closure_stm` are getting completely optimized away (maybe others as well; I didn't look that closely).
```
julia> @code_native test_resumable()
        .text
; ┌ @ benchmarks.jl:31 within `test_resumable'
        movq    %rsi, -8(%rsp)
; │ @ benchmarks.jl:33 within `test_resumable'
        movabsq $140230517973000, %rax  # imm = 0x7F89F635E008
        retq
; └

julia> @code_native test_closure_stm()
        .text
; ┌ @ benchmarks.jl:128 within `test_closure_stm'
        movq    %rsi, -8(%rsp)
; │ @ benchmarks.jl:130 within `test_closure_stm'
        movabsq $140230517973000, %rax  # imm = 0x7F89F635E008
        retq
; └
```
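One way to avoid this would be to have the benchmark functions accumulate and return the yielded values, so the loop has an observable result and can't be eliminated as dead code. A minimal sketch of what I mean, modeled on the README's Fibonacci example rather than the actual benchmark code (`fibonacci_resumable` and `test_resumable_sum` are just illustrative names):

```julia
using ResumableFunctions

# Hypothetical resumable Fibonacci generator, modeled on the README example;
# the real benchmark functions may differ.
@resumable function fibonacci_resumable(n::Int) :: Int
    a, b = 0, 1
    for _ in 1:n
        @yield a
        a, b = b, a + b
    end
end

# Accumulating and returning the yielded values gives the function an
# observable result, so the iteration loop can't be optimized away.
function test_resumable_sum(n::Int=80)
    s = 0
    for v in fibonacci_resumable(n)
        s += v
    end
    return s
end
```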
I didn't trust the assertion in the README about the relative performance of resumable functions and Tasks/Channels, so I ran some simple benchmarks. I thought the results might be worth including in the README or documentation:
In general, ResumableFunctions.jl seems to be about 2 times faster than doing the equivalent with Channels. Interestingly, if I change the Channel `csize` to 1 instead of 0, the discrepancy is closer to 4-5 times rather than 2, and increasing `csize` to 10 adds several orders of magnitude to the runtime. Also, obviously, neither holds a candle to just using the vanilla iteration interface with a custom type; both are about 3 orders of magnitude slower than that option.
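For concreteness, this is roughly the shape of the three variants I compared. It's a sketch, not the actual benchmark code: `fibonacci_channel`, `FibIter`, and `sum_all` are names I made up for illustration, and the positional buffer size here corresponds to the `csize` mentioned above.

```julia
using BenchmarkTools

# Channel-based equivalent of the resumable generator; the buffer size
# argument plays the role of csize.
function fibonacci_channel(n::Int, csize::Int=0)
    Channel{Int}(csize) do ch
        a, b = 0, 1
        for _ in 1:n
            put!(ch, a)
            a, b = b, a + b
        end
    end
end

# Vanilla iteration interface with a custom type.
struct FibIter
    n::Int
end
function Base.iterate(f::FibIter, state=(0, 1, 0))
    a, b, i = state
    i >= f.n && return nothing
    return (a, (b, a + b, i + 1))
end
Base.length(f::FibIter) = f.n
Base.eltype(::Type{FibIter}) = Int

# Consume the iterator so the work can't be optimized away.
function sum_all(itr)
    s = 0
    for v in itr
        s += v
    end
    return s
end

@btime sum_all(fibonacci_channel(80))     # csize = 0 (unbuffered)
@btime sum_all(fibonacci_channel(80, 1))  # csize = 1
@btime sum_all(FibIter(80))               # plain custom iterator
```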
I'm curious. Has anyone looked into why Channels are so slow as iterators and whether they can be improved to be at least on par with ResumableFunctions?