Closed aminya closed 4 years ago
I used `@timev`, and it clearly shows the improvements 🚀. People thought that the only improvement we get is in the inference time.
For example, for Zygote:
Running the tests under `@timev` shows that we get:
https://github.com/aminya/Zygote.jl/runs/646094164?check_suite_focus=true#step:6:123
If this is the case, should I just use `@timev` for benchmarking instead of `@snoopi`? People seem to have better intuition for it.
```julia
julia_code_timev = """
using $package_name
@timev begin
    $(string(snoop_script));
end
@info("The above is the @timev result (this has some noise).")
"""
julia_cmd_timev = `julia --project=@. -e $julia_code_timev`
```
`@snoopi` is for discovery and for measuring just one thing: inference time. `@timev` measures a whole bunch of things, including load time, LLVM time, execution time, etc. In the end it's closer to what people will want to reduce, so it's not bad to use it to decide whether the precompiles are worth it. E.g., #74.
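To illustrate what `@timev` bundles together, here is a minimal, self-contained sketch (the toy `work` function is an assumption for illustration, not from this thread): the first call includes compilation, while the second shows the steady-state execution cost.

```julia
# Toy workload (hypothetical, for illustration only).
work(v) = sum(x -> x^2, v)

v = collect(1:10^6)

# First call: `@timev` reports elapsed time *including* compilation,
# plus allocation and GC details.
@timev work(v)

# Second call: compilation is already done, so this approximates
# pure execution time.
@timev work(v)
```

Comparing the two runs separates the one-time compilation cost, which precompilation targets, from the steady-state execution time, which it does not affect.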
In general I worry that too many people use SnoopCompile unthinkingly; for me, the "discovery" part is by far the most important, and I generally prefer to write the precompile file by hand. But I recognize I'm probably an outlier. Recommending `@timev` might help people realize that, for example, setting a higher `tmin` (and thus precompiling less) might actually reduce the times they care about.
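Raising the threshold might look like this sketch (assuming SnoopCompile's `@snoopi ... tmin` keyword syntax; `MyPkg` and its workload are placeholders): methods whose inference time falls below `tmin` are dropped from the results, so fewer precompile directives get emitted.

```julia
using SnoopCompile

# Keep only methods whose inference took at least 10 ms (tmin is in seconds);
# cheap-to-infer methods are excluded, shrinking the precompile file.
inf_timing = @snoopi tmin = 0.01 begin
    using MyPkg            # placeholder package
    MyPkg.some_workload()  # placeholder workload
end
pc = SnoopCompile.parcel(inf_timing)
```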
People are less concerned with the precompilation time itself than with its effect on performance. Using `@timev` helps with this for sure. I think I can close this, as it will be solved in #71.
Is there any other timing measure that shows the effect of using precompiles (in a dynamic manner, without a sysimage) other than what we have in `@snoopi` and `snoopi_bot`? Is `@timev` a good measure?