Kitura with Kitura-NIO seems to perform badly on the web-frameworks benchmarks

pushkarnk opened 5 years ago

Reported over Slack. Copied from there:
@pushkarnk this looks like some kind of stall to me. At the 50th percentile (median) Kitura and Kitura-NIO are pretty much the same, but the standard deviation is huge! So either the machine that was running the tests was super busy at some point while running kitura-nio, or something stalled/got stuck for a while?
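To illustrate why a matching median plus a huge standard deviation points at a stall rather than uniformly slower code, here is a minimal Swift sketch over made-up latency samples (the numbers are illustrative, not from the benchmark): a single multi-second pause barely moves the median but blows up the deviation.

```swift
// Hypothetical request latencies in ms; not actual benchmark data.
let steady:  [Double] = [4.8, 5.0, 5.1, 5.2, 5.0, 4.9, 5.1, 5.0]
let stalled: [Double] = [4.8, 5.0, 5.1, 5.2, 5.0, 4.9, 5.1, 2500.0]  // one stall

// Median of a non-empty sample.
func median(_ xs: [Double]) -> Double {
    let s = xs.sorted()
    let mid = s.count / 2
    return s.count % 2 == 0 ? (s[mid - 1] + s[mid]) / 2 : s[mid]
}

// Population standard deviation.
func stddev(_ xs: [Double]) -> Double {
    let mean = xs.reduce(0, +) / Double(xs.count)
    let variance = xs.map { ($0 - mean) * ($0 - mean) }.reduce(0, +) / Double(xs.count)
    return variance.squareRoot()
}

print("steady:  median \(median(steady)) ms, stddev \(stddev(steady)) ms")
print("stalled: median \(median(stalled)) ms, stddev \(stddev(stalled)) ms")
// The medians are nearly identical; the single stall blows up the deviation.
```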
FYI, Docker is in use for the results above, so the metrics are not very reliable (they could vary by ~40%). I'm working on https://github.com/the-benchmarker/web-frameworks/pull/632 to use an isolated droplet and thereby reduce the Docker noise.
PS: If I can also get a sponsored machine from IBM, I'll be happy to run Kitura benchmarks on it :heart:
Oh, Docker with ingress/egress networking? Yeah, that'll give you a bunch of variance.
In the future, it will be on a dedicated VM with a 2 GB/s link.
FYI, we can replicate a gap between Kitura-NIO and Kitura-net, and that definitely needs investigation. Previous investigations have shown Kitura-NIO to scale better than Kitura-net, but it's more CPU-hungry, so slower at lower CPU counts; we need to find out where that CPU is being burnt (a first-pass way to quantify that is sketched below).
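Purely as an illustration (this isn't from the thread or the benchmark harness), one way to quantify the CPU burnt for a fixed workload is to sample process CPU time around it. A minimal Swift sketch using POSIX getrusage(2), which would have to run inside the server process:

```swift
#if canImport(Glibc)
import Glibc
#else
import Darwin
#endif

// Process CPU time (user + system) in seconds, via POSIX getrusage(2).
func cpuSeconds() -> Double {
    var usage = rusage()
    _ = getrusage(RUSAGE_SELF, &usage)
    let user = Double(usage.ru_utime.tv_sec) + Double(usage.ru_utime.tv_usec) / 1_000_000
    let sys  = Double(usage.ru_stime.tv_sec) + Double(usage.ru_stime.tv_usec) / 1_000_000
    return user + sys
}

let before = cpuSeconds()
// ... serve a fixed number of requests here, inside the process under test ...
let after = cpuSeconds()
print("CPU burnt for the workload: \(after - before) s")
```

Comparing this figure for Kitura-net and Kitura-NIO over the same request count would separate "more CPU per request" from "slower wall-clock" before reaching for a profiler.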
However, @helenmasters and I couldn't replicate the general slowness of Kitura vs. Vapor when running the benchmark on a bare-metal Ubuntu 16.04 server, once we removed the Ubuntu version penalty (https://github.com/the-benchmarker/web-frameworks/pull/1346). We limited the Docker containers to a subset of CPUs to emulate the 8-CPU environment, and Vapor and Kitura came out similar in performance (results summarised in https://github.com/IBM-Swift/Kitura/issues/1448).
Perfect did show a lead over the others in our environment, similar to the one reported here, so the puzzle is why our local results for Kitura are substantially better than the ones published above.