Open gdalle opened 1 year ago
We can speed up loading of InferOpt by replacing ThreadsX with built-in threads:
```julia
res = ThreadsX.map(f(i) for i in 1:n)
```
would become
```julia
using Base.Threads: @threads

f1 = f(1)                          # run the first call serially
res = Vector{typeof(f1)}(undef, n) # so we can infer the element type
res[1] = f1
@threads for i in 2:n
    res[i] = f(i)
end
```
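To make the pattern concrete, here is a minimal self-contained sketch. The function name `threaded_map` and the workload `f(i) = i^2` are hypothetical stand-ins, not part of InferOpt; the sketch assumes `n ≥ 1` since the first call seeds the element type.

```julia
using Base.Threads: @threads

# Hypothetical workload standing in for the real per-item computation.
f(i) = i^2

# Hypothetical helper wrapping the pattern from the issue body.
function threaded_map(f, n)
    f1 = f(1)                          # first call runs serially...
    res = Vector{typeof(f1)}(undef, n) # ...so the element type is known
    res[1] = f1
    @threads for i in 2:n              # remaining calls run in parallel
        res[i] = f(i)
    end
    return res
end

threaded_map(f, 10) == map(f, 1:10)  # same result as a serial map
```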
Downsides: with `t` threads, the theoretical runtime becomes `1 + (n-1)/t` instead of `n/t`, because the first call `f(1)` must run serially to determine the element type.
Upsides: faster loading of InferOpt, since the ThreadsX dependency can be dropped.
Also beware that `@threads` sometimes introduces type instabilities due to boxing: https://github.com/JuliaLang/julia/issues/41731