Closed OvermindDL1 closed 6 years ago
IRC (RX14) states, and I quote:
> the gist of it is passing `reuse_port: true` to the `listen` and then just spawning `nproc --all` processes in a bash loop lol

:-)
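For context, Crystal's `reuse_port: true` maps to the OS-level `SO_REUSEPORT` socket option, which lets several listening sockets bind the same port so the kernel can spread connections across them. A minimal sketch of that mechanism, using Python as a stand-in (the port number here is arbitrary; this is the POSIX/Linux option, nothing Crystal-specific):

```python
import socket

def make_listener(port: int) -> socket.socket:
    # SO_REUSEPORT allows multiple listening sockets on the same
    # address/port; the kernel load-balances accepts across them.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(("127.0.0.1", 18081 if port is None else port))
    sock.listen(128)
    return sock

# Without SO_REUSEPORT the second bind would raise EADDRINUSE;
# with it, both binds on the same port succeed.
a = make_listener(18081)
b = make_listener(18081)
```

Each worker process would create its own such listener, which is what the "spawn `nproc --all` processes in a bash loop" part accomplishes.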
As I mentioned somewhere, reusing ports is implemented at the Crystal level, not in a framework or middleware. So I think that tuning should not be added to each framework.
Correct, it's not a framework/middleware thing, but rather they have to be sharded out at the OS level, similar to Python and NodeJS setups.
At the very least, the current benchmarker absolutely should not be written in Crystal.
@OvermindDL1 or there should be some bindings to `wrk` in Crystal :stuck_out_tongue_winking_eye:
That works indeed. As long as the actual 'benchmarking' code is not written in crystal. ^.^
@OvermindDL1 for sure.
Moreover, I think that running `wrk` as our benchmarking tool could be done in two ways:
either by calling `wrk` and parsing its stdout / stderr,
That's what my script does. I should probably PR it so it can be used as an example or something...
or by writing bindings to have proper communication with `wrk`.
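The first option (shelling out to `wrk` and parsing its stdout) can be sketched like this, using Python as a stand-in; the `wrk -t/-c/-d` flags and the `Requests/sec:` summary line are standard `wrk` output, while the helper names and the sample string are mine:

```python
import re
import subprocess

def run_wrk(url: str, threads: int = 2, connections: int = 64,
            duration: str = "10s") -> str:
    # wrk prints a human-readable summary on stdout.
    result = subprocess.run(
        ["wrk", "-t", str(threads), "-c", str(connections),
         "-d", duration, url],
        capture_output=True, text=True, check=True)
    return result.stdout

def parse_requests_per_sec(output: str) -> float:
    # The summary contains a line like "Requests/sec:  12345.67".
    match = re.search(r"Requests/sec:\s+([\d.]+)", output)
    if match is None:
        raise ValueError("no Requests/sec line in wrk output")
    return float(match.group(1))

# Illustrative sample of the summary line, not real benchmark data:
sample = "Requests/sec:  10482.31\nTransfer/sec:  1.21MB\n"
rps = parse_requests_per_sec(sample)
```

Parsing the text summary is fragile across `wrk` versions, which is why proper bindings were raised as the alternative.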
I've never actually checked, can `wrk` be used as a library?
I'll close this, since `wrk` is now used, and #69 is open for discussion (`SO_REUSEPORT`).
I was speaking to the devs in the Crystal IRC channel and they state that Crystal is absolutely, positively, NOT parallel. It is concurrent, sure, but it will not use more than 1 core (the GC can eat a bit more though).
They state that servers written in Crystal need to be sharded across processes just like the node cluster_express server runs. And the benchmarking client absolutely, positively should NOT be written in Crystal; they even stated `wrk` should be used instead too. ^.^ So yeah, we need to use `wrk` instead, and the current Crystal servers need to be re-made to run in sharded processes with an `SO_REUSEPORT` acceptor pool. Is there anyone that can do that? :-)
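The sharded acceptor-pool shape being asked for here can be sketched as follows, again in Python as a stand-in for the Crystal servers (the function names, port, and response body are illustrative; the fork-per-core pattern mirrors the bash loop quoted above):

```python
import os
import socket

def make_listener(port: int) -> socket.socket:
    # Every worker binds its own socket on the same port;
    # SO_REUSEPORT lets the kernel balance accepts across them.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(("0.0.0.0", port))
    sock.listen(128)
    return sock

def worker(port: int) -> None:
    # One single-threaded acceptor loop; parallelism comes from
    # running one of these per core, not from threads.
    sock = make_listener(port)
    while True:
        conn, _ = sock.accept()
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()

def shard(port: int, workers: int) -> None:
    # Fork one acceptor per core, the process-level equivalent of
    # `for i in $(seq $(nproc --all)); do ./server & done`.
    for _ in range(workers):
        if os.fork() == 0:
            worker(port)   # children never return
    for _ in range(workers):
        os.wait()          # parent reaps the shard pool

# e.g. shard(8080, os.cpu_count() or 1)  -- not run here
```

A Crystal version would do the same thing with `reuse_port: true` on the listener and one OS process per core, since a single Crystal process stays on one core.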