Closed: charleskawczynski closed this issue 1 year ago
I can repro this, but it isn't an issue with ProfileCanvas:
```julia
julia> @show p_allocated # 144 bytes
p_allocated = 144
144

julia> Profile.Allocs.clear()

julia> Profile.Allocs.@profile sample_rate = 1 workload(x)

julia> results = Profile.Allocs.fetch()
Profile.Allocs.AllocResults(Profile.Allocs.Alloc[Profile.Allocs.Alloc(Vector{Any}, Base.StackTraces.StackFrame[maybe_record_alloc_to_profile at gc-alloc-profiler.h:42 [inlined], ...], 40), Profile.Allocs.Alloc(Profile.Allocs.BufferType, Base.StackTraces.StackFrame[maybe_record_alloc_to_profile at gc-alloc-profiler.h:42 [inlined], ...], 64), Profile.Allocs.Alloc(Profile.Allocs.UnknownType, Base.StackTraces.StackFrame[maybe_record_alloc_to_profile at gc-alloc-profiler.h:42 [inlined], ...], 16)])

julia> sum(a -> a.size, results.allocs)
120
```
Please open an issue against base :)
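For reference, the repro above can be sketched end-to-end as follows. The `workload` function here is hypothetical (the original issue does not show its definition); the exact byte counts will depend on the Julia version and on what `workload` actually does.

```julia
using Profile

# Hypothetical allocating workload, standing in for the one in the issue.
workload(x) = push!(Any[], x)

x = 1
workload(x)  # warm up so compilation-time allocations are excluded

# What @allocated reports for one call.
p_allocated = @allocated workload(x)

# What Profile.Allocs reports for the same call, sampling every allocation.
Profile.Allocs.clear()
Profile.Allocs.@profile sample_rate = 1 workload(x)
results = Profile.Allocs.fetch()
profiled = sum(a -> a.size, results.allocs; init = 0)

# Per the issue, these two totals can disagree (144 vs. 120 bytes above).
@show p_allocated profiled
```

With `sample_rate = 1` every allocation should be recorded, so any remaining gap between the two totals is the discrepancy being reported here, not sampling noise.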
I'm not sure if this is an issue with `Profile.Allocs`, ProfileCanvas.jl, or my interpretation/understanding, but: switching over to `size` in this flame graph shows 3 blocks:

- `Vector{Any}` (40 bytes)
- `BufferType` (64 bytes)
- `UnknownType` (16 bytes)

totalling 120 bytes, which does not match what `@allocated` reports. Is my understanding of the flame graph wrong, or is this a bug?