Closed cayolblake closed 2 years ago
Are you using Swoole or Roadrunner? And how long are these queries taking normally, if executed synchronously?
Using Swoole, and everything is at latest versions.
These queries take a negligible amount of time: a few microseconds each.
I also checked storage/logs/swoole_http.log
for errors, and tried many different worker and task-worker counts without any effect, so I rolled back to no config at all.
What could be the issue? The only difference appears when I use Octane::concurrently.
Creating the tasks and shipping them to a separate worker also takes some time, so it might be better to just perform the queries sequentially if they return within milliseconds. If, on the other hand, you have expensive queries (like a multi-second query) or you need to perform multiple outgoing HTTP calls, executing them concurrently makes total sense.
The overhead of using Octane::concurrently()
can be made visible very easily:
// {"duration": 4000.1609325408936}
Route::get('/without-concurrency', function () {
    $start = microtime(true);

    sleep(4);

    return response()->json(['duration' => (microtime(true) - $start) * 1000]);
});
// {"duration": 2004.734920501709}
Route::get('/with-concurrency', function () {
    $start = microtime(true);

    Octane::concurrently([
        fn () => sleep(2),
        fn () => sleep(2),
    ]);

    return response()->json(['duration' => (microtime(true) - $start) * 1000]);
});
The overhead may not always be equally visible. It might be worth mentioning that the task workers need a bit more time to execute concurrent tasks right after startup: the number above goes down to approximately 2001.5 ms
on my machine after a few executions.
Oh, almost forgot: the time it takes to ship your asynchronous tasks to the workers also depends on the size of their input and output, due to serialization and inter-process communication.
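To make that point concrete, here is a minimal sketch in plain PHP (not Octane internals; the `roundTripCost` helper is mine) of how payload size alone affects the serialize/unserialize round trip that every dispatched task pays in some form:

```php
<?php

// Rough sketch: the cost of serializing a task's input and decoding its
// output grows with payload size. This is part of what each
// Octane::concurrently() dispatch pays, on top of the socket round trip.
function roundTripCost(mixed $payload): float
{
    $start = microtime(true);

    $bytes = serialize($payload); // shipped to the task worker
    unserialize($bytes);          // decoded on the other side

    return (microtime(true) - $start) * 1000; // milliseconds
}

$small = ['id' => 1];
$large = array_fill(0, 100_000, str_repeat('x', 32));

printf("small payload: %.4f ms\n", roundTripCost($small));
printf("large payload: %.4f ms\n", roundTripCost($large));
```

For tiny query results the round trip is cheap, but it is never free, and for large result sets it can easily exceed the query itself.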
That would render Swoole coroutines useful only for unreliable network calls, heavy or parallel disk I/O, or database queries that take seconds.
Could the reason be that the task worker needs some warming up, or that it loads some context into the thread/process it works in? I'm not sure whether it uses a shared-memory technique to access the required data ahead of time, or whether it gets forked like good old threads.
I think the main Swoole server communicates with the task workers using UNIX sockets, which should be quite fast. The task workers also need to warm up, hence my comment above about the measured duration decreasing slightly after a few runs.
What database driver are you using and what operating system?
Hi there,
Thanks for reporting but it looks like this is a question which can be asked on a support channel. Please only use this issue tracker for reporting bugs with the library itself. If you have a question on how to use functionality provided by this repo you can try one of the following channels:
* [Laracasts Forums](https://laracasts.com/discuss)
* [Laravel.io Forums](https://laravel.io/forum)
* [StackOverflow](https://stackoverflow.com/questions/tagged/laravel)
* [Discord](https://discordapp.com/invite/KxwQuKb)
* [Larachat](https://larachat.co)
* [IRC](https://web.libera.chat/?nick=laravelnewbie&channels=#laravel)
However, this issue will not be locked and everyone is still free to discuss solutions to your problem!
Thanks.
MySQL using mysqli, Linux.
As per one of Swoole's maintainers, this is probably a closure performance issue in Octane's implementation, so I guess it qualifies as an issue worth reporting.
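For context, and as an assumption on my part rather than something confirmed above: a plain PHP Closure cannot be serialized, so before a task can be shipped to a task worker it has to be wrapped (Octane depends on the laravel/serializable-closure package for this). That wrap/serialize/unserialize round trip happens per task, per request, and a sequential call never pays it. A rough way to measure that cost in isolation, assuming the package is installed via Composer:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use Laravel\SerializableClosure\SerializableClosure;

// Time 1,000 wrap/serialize/unserialize round trips for a trivial closure.
// This approximates a fixed per-task cost that exists even when the task
// body itself (a fast SELECT, say) takes only microseconds.
$closure = fn () => 1 + 1;

$start = microtime(true);

for ($i = 0; $i < 1000; $i++) {
    $wrapped  = serialize(new SerializableClosure($closure));
    $restored = unserialize($wrapped)->getClosure();
}

printf("1000 round trips: %.2f ms\n", (microtime(true) - $start) * 1000);
```

If closure round trips like this dominate the per-request cost, that would explain why two microsecond-level queries end up slower concurrently than sequentially.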
Hello,
Using the following code to execute two functions, each of which hits the database through a different model with a simple SELECT query, resulted in
380 req/s
when testing with ab
. Before using
Octane::concurrently()
I was executing them sequentially in the same order, and performance was 600 req/s
. Any idea what might be going on here? I have no special config, and haven't added any config for Octane whatsoever.