juliolitwin opened this issue 4 years ago
Hi,
Yes, sorry about that. Zerio currently depends on the not-yet-released `UnmanagedDisruptor<T>` and `UnmanagedRingBuffer<T>` types. My colleague (and accomplice) @ocoanet will release a new version of the disruptor soon, but until then you need to clone the disruptor-net repo locally and reference the project manually to make Zerio compile.
I will reference the NuGet package as soon as possible.
I'm currently working on updating the readme to document the main design changes.
Hi @rverdier,
Given the current experimental state of the project, is high CPU consumption expected? When I start the server and the client, my CPU reaches 90% usage (i7).
Thanks.
Hello,
Currently, Zerio uses 3 threads per peer:

- `RequestProcessor` (first event handler of the disruptor), responsible for dispatching I/O requests to the RIO request queue
- `SendCompletionProcessor` (second and last event handler of the disruptor), responsible for polling send request completions
- `ReceiveCompletionProcessor`, acting as the reception loop (polling receive request completions and resubmitting receive requests after incoming message handling)

With the current implementation, the CPU usage is expected to be quite high (especially if you run both the client and the server on your local machine), because I use a very aggressive wait strategy in the disruptor: the `BusySpinWaitStrategy`. You can play with other wait strategies by modifying the `RequestProcessingEngine.CreateDisruptor` method, as sketched below.
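For illustration, here is a minimal sketch of what a less aggressive disruptor setup could look like. It uses the managed `Disruptor<T>` from disruptor-net for simplicity (Zerio actually relies on `UnmanagedDisruptor<T>`), and the `RequestEntry` event type and method shape are assumptions for this sketch, not the actual Zerio code; the constructor signature may also differ between disruptor-net versions:

```csharp
using System.Threading.Tasks;
using Disruptor;
using Disruptor.Dsl;

// Hypothetical event type standing in for Zerio's actual request entry.
public class RequestEntry
{
}

public static class DisruptorSetupSketch
{
    public static Disruptor<RequestEntry> CreateDisruptor(int ringBufferSize)
    {
        return new Disruptor<RequestEntry>(
            () => new RequestEntry(),
            ringBufferSize,
            TaskScheduler.Default,
            ProducerType.Single,
            new BlockingWaitStrategy()); // instead of new BusySpinWaitStrategy()
    }
}
```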
Also, you can reduce the aggressiveness of the reception loop by using a `SpinWait` in `ReceiveCompletionProcessor.ProcessCompletion`:
```csharp
var spinWait = new SpinWait();

while (_isRunning)
{
    var resultCount = completionQueue.TryGetCompletionResults(results, maxCompletionResults);
    if (resultCount == 0)
    {
        // No completion available: back off progressively instead of busy spinning.
        spinWait.SpinOnce();
        continue;
    }

    for (var i = 0; i < resultCount; i++)
    {
        var result = results[i];
        var sessionId = (int)result.ConnectionCorrelation;
        var bufferSegmentId = (int)result.RequestCorrelation;
        OnRequestCompletion(sessionId, bufferSegmentId, (int)result.BytesTransferred);
    }

    // Completions were processed: reset the back-off for the next idle period.
    spinWait.Reset();
}
```
Thanks again for answering, @rverdier.
I need to experiment with the disruptor settings, because running other servers on the same machine alongside such high CPU usage is a bit complicated. But Zerio is amazing!
I want to use Zerio without depending on Zerio's own client. I would like to use it with Unity, and pulling in the disruptor and other reflection-heavy dependencies would only bring more headaches because of IL2CPP, on top of the high CPU usage (heavy CPU usage is not a serious problem for the server, but the client is another story). Is there anything that can be done, without too many headaches, to talk to Zerio using a simple TCP network layer?
Happy New Year! Regards.
The purpose of busy waiting strategies is to achieve better latencies. But if you do not have strong requirements on that side, you can very easily get the CPU usage of Zerio down to almost 0 (when idle), for example by using blocking strategies and a `SpinWait` in the reception loop.
Note that I pushed a new `HybridWaitStrategy` that only busy-spins for the first, more critical, `RequestProcessor` event handler.
> Is there anything that can be done, without too many headaches, to talk to Zerio using a simple TCP network layer?
It is very easy to write a simple TCP client for Zerio (you could take the `TcpFeedClient` as an example); you don't have to use RIO on the client side. The only "protocol" constraint is that `ZerioServer` will echo back the first message it receives, as a handshake.
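To make that concrete, here is a minimal sketch of such a client. The handshake logic (send a first message, wait for the server to echo it back) comes from the thread; the 4-byte length-prefixed framing, the endpoint, and all names are assumptions for illustration, so check Zerio's `MessageFramer` / `TcpFrameSender` for the actual wire format:

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

public static class SimpleZerioTcpClientSketch
{
    public static void Main()
    {
        using (var client = new TcpClient("localhost", 48654)) // hypothetical endpoint
        {
            var stream = client.GetStream();

            // Handshake: send a first message and wait for the server to echo it back.
            var handshake = Encoding.UTF8.GetBytes("hello");
            WriteFrame(stream, handshake);
            var echoed = ReadFrame(stream);
            Console.WriteLine($"Handshake completed: {Encoding.UTF8.GetString(echoed)}");

            // From here on, regular messages can be sent with WriteFrame.
        }
    }

    // ASSUMED framing: 4-byte little-endian length prefix followed by the payload.
    private static void WriteFrame(NetworkStream stream, byte[] payload)
    {
        var header = BitConverter.GetBytes(payload.Length);
        stream.Write(header, 0, header.Length);
        stream.Write(payload, 0, payload.Length);
    }

    private static byte[] ReadFrame(NetworkStream stream)
    {
        var header = ReadExactly(stream, 4);
        return ReadExactly(stream, BitConverter.ToInt32(header, 0));
    }

    private static byte[] ReadExactly(NetworkStream stream, int count)
    {
        var buffer = new byte[count];
        var offset = 0;
        while (offset < count)
        {
            var read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed while reading a frame.");
            offset += read;
        }
        return buffer;
    }
}
```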
Thanks, you are very responsive. xD
The problem I am seeing now is that I cannot connect 2 sessions at the same time; even after changing the session count, Winsock returns error 10055 (WSAENOBUFS).
No problem! And yes, there is currently an issue with the queue sizing. I'm working on it and will try to push a fix soon.
@jclitwin It should be a bit better now. Don't forget to change `ZerioConfiguration.SessionCount` if you want more than 2 client sessions. Right now session contexts are preallocated, so you have to set the maximum number of sessions a `ZerioServer` can handle concurrently.
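As a hedged illustration (only the `SessionCount` property is confirmed above; the `CreateDefault` factory and the `ZerioServer` constructor shape are assumptions about the API):

```csharp
// Hypothetical configuration sketch: preallocate contexts for up to 16 concurrent sessions.
var config = ZerioConfiguration.CreateDefault(); // assumed factory method
config.SessionCount = 16;

// The server would then be created with this configuration (constructor shape assumed):
// var server = new ZerioServer(listeningPort, config);
```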
@rverdier wow! It is working perfectly!
There are some buffers that are initialized but whose sizes are not defined by `ZerioConfiguration` itself; is that intentional? Examples: `MessageFramer`, `TcpFrameReceiver`, `TcpFrameSender`.
Do you have any kind of roadmap, or a list of known bugs and planned improvements? Zerio already looks quite stable.
I am willing to risk adapting Zerio into my project behind #if directives and see how the performance differs. Thanks again for this great work!
Yes, I need to make more values configurable. The TCP-based implementations are only used for temporary benchmarks; I will get rid of them at some point.
I have no roadmap yet, but I plan to work full time on the project next week. I guess I'll see things more clearly after that, and maybe I will be able to update the documentation and create a bunch of issues directly on GitHub.
Would it be a good idea to expose the disruptor configuration on the `ZerioServer`, for example through its constructor, in order to choose the type of `WaitStrategy`? Some kinds of servers (a login server, for example) don't need to be so aggressive, and this would avoid having to change the core directly.
@rverdier there is a problem: if you start the server and a client and then close the client, you can't connect a new client afterwards.
I just pushed new configuration options to make the disruptor wait strategy configurable, as well as the polling wait strategies used both for sends and receives.
If you want minimal CPU usage when idle, you can try these settings for example:
```csharp
config.RequestEngineWaitStrategyType = RequestEngineWaitStrategyType.BlockingWaitStrategy;
config.ReceiveCompletionPollingWaitStrategyType = CompletionPollingWaitStrategyType.SpinWaitWaitStrategy;
config.SendCompletionPollingWaitStrategyType = CompletionPollingWaitStrategyType.SpinWaitWaitStrategy;
```
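(As discussed earlier in the thread, these blocking and yielding strategies trade some wake-up latency for near-zero idle CPU usage, whereas the busy-spin strategy mentioned above favors latency at the cost of a constantly spinning core.)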
> there is a problem: if you start the server and a client and then close the client, you can't connect a new client afterwards.
Yes, I'm aware of it. Both `ZerioServer` and `ZerioClient` are pretty broken regarding starts and stops for now; I'll be addressing these issues this week, I think. I'm afraid you'll have to dispose and re-instantiate the client each time you want to reconnect in the meantime.
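For reference, that workaround could look roughly like the following sketch; the `ZerioClient` constructor and `Start` signature shown here are assumptions, not the confirmed API:

```csharp
using System.Net;

public static class ReconnectSketch
{
    // Hypothetical helper: instead of reusing a stopped client, dispose it
    // completely and build a fresh instance before reconnecting.
    public static ZerioClient Reconnect(ZerioClient oldClient, IPEndPoint serverEndpoint)
    {
        oldClient?.Dispose();                         // tear down the previous instance

        var client = new ZerioClient(serverEndpoint); // assumed constructor
        client.Start();                               // assumed start method
        return client;
    }
}
```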
@rverdier
Yo,
Has the project been stopped? Cheers.
I'm not actively working on the project these days, but it doesn't mean it is "stopped".
Thanks for answering. I see enormous potential in Zerio, which is why I was concerned that the project had been stopped.
> The purpose of busy waiting strategies is to achieve better latencies.
Because of latency, I chose the UDP protocol. The latency problem is really a TCP protocol problem: compared with the delay caused by the network protocol, the delay caused by the program logic can be ignored.
UDP combined with an aggressive ARQ algorithm can achieve very good results (less delay/latency); one example of such a protocol is https://github.com/skywind3000/kcp.
For me, the most important goal is using RIO to provide higher throughput while also reducing CPU utilization. Server resources are abundant if you are willing to spend money, but all of this comes at a cost.
Yo, again ~
Is Zerio using a custom Disruptor? Because now, on the master branch, the project fails to load the Disruptor when I open it.
Cheers.