Closed · Moelf closed this issue 1 year ago
I'm not sure what is going on here, but these are the results. For now I'll assume the VM is isolated and we can disregard outside influences; otherwise, that could be the cause.
> do a warm-up run for each language (hyperfine has this option)
I doubt that would help. The hyperfine documentation says warm-up runs only warm up caches, which is useful if you're doing a lot of IO. But these operations are not IO-heavy, they are CPU-heavy, so it wouldn't help. I think you are thinking of Lambda functions and their startup and warm-up time. That behavior isn't possible here, because every command invocation has to boot up the environment from scratch.
> increase iterations for fast languages to reduce fluctuation
This might be an option, but how would you define "fast", and increase by how much? It would also increase the CI time.
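To make the trade-off concrete, here is a minimal sketch (not the repo's actual benchmark code; the 100 ms true cost and the jitter model are made-up assumptions) showing why more iterations reduce fluctuation: the run-to-run spread of the reported mean shrinks roughly with the square root of the iteration count.

```python
import random
import statistics

random.seed(0)

def run_once():
    # One timed invocation: a hypothetical true cost of 100 ms
    # plus nonnegative system jitter (scheduling, interrupts).
    return 100.0 + random.expovariate(1.0 / 10.0)

def spread_of_mean(n_iterations, trials=300):
    # Repeat the whole benchmark `trials` times and measure how much
    # the reported mean wobbles between benchmark runs.
    means = [
        statistics.mean(run_once() for _ in range(n_iterations))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

few = spread_of_mean(10)    # 10 iterations per benchmark run
many = spread_of_mean(100)  # 10x the iterations, and 10x the CI time
```

Going from 10 to 100 iterations tightens the result noticeably, but also multiplies the benchmark's runtime, which is exactly the CI-time concern above.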
> take the minimum instead of the mean, as is usually done to benchmark CPU-bound tasks
That seems like the best option to me, but I'm not sure about the min. Maybe use the median instead?
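A small sketch of the min-vs-median-vs-mean question, under the usual assumption for CPU-bound work that noise only ever adds time (the 100 ms true cost and exponential noise are hypothetical, not measured): the minimum sits closest to the true cost, while the mean and median are both biased upward by the noise.

```python
import random
import statistics

random.seed(42)

TRUE_TIME_MS = 100.0  # hypothetical "real" CPU cost of the task

def noisy_run(scale_ms=10.0):
    # System noise (scheduling, cache misses, interrupts) can only
    # slow a run down, so each sample is true time + nonnegative noise.
    return TRUE_TIME_MS + random.expovariate(1.0 / scale_ms)

samples = [noisy_run() for _ in range(200)]

est_min = min(samples)
est_median = statistics.median(samples)
est_mean = statistics.mean(samples)
```

Under this additive-noise model the min is the least biased estimator; the median is more robust than the mean to outlier runs but still includes the typical noise, which is why min is the convention for CPU-bound benchmarks.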
See also the discussion in #17.
There are only so many languages; we can just pick them manually.
It's very obvious from the graph that there's a set of tier-1 languages much faster than the rest (all statically compiled, except maybe LuaJIT).
@Moelf did you close this in favour of #60 and #59?
This doesn't make any sense (JIT languages (pypy and LuaJIT) can't take a negative amount of time to start up). I think we probably need to either:

- do a warm-up run for each language (hyperfine has this option)