Closed — AlliBalliBaba closed this issue 8 months ago
Hi @AlliBalliBaba,
Can you please run the following command on the application you are experiencing this on and let us know the output?
php artisan tinker --execute="echo config('pulse.ingest.driver')"
Could you also provide the code within the /performance endpoint? What work is happening there?
Hey, thanks for the quick answer,
php artisan tinker --execute="echo config('pulse.ingest.driver')"
returns
storage
The /performance endpoint just does the following, where the User is stored in the same database as the Pulse tables:
public function handle()
{
    return ['user' => User::first()];
}
I think your suspicion is correct that the storage INGEST_DRIVER is what makes workers block each other. I will try changing the size of the ingest buffer, or setting the driver to redis, and see if it makes a difference.
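Switching the ingest driver is done through the environment; a minimal sketch, assuming the default `config/pulse.php` shipped with Pulse (the buffer value shown is illustrative):

```ini
# .env — switch Pulse from the storage ingest to Redis
PULSE_INGEST_DRIVER=redis

# Alternatively, tune the buffer while staying on the storage driver
# (illustrative value; check config/pulse.php for the actual default)
# PULSE_INGEST_BUFFER=5000
```

With the redis driver, entries are pushed to Redis and trimmed/stored by the `pulse:work` process instead of being written to the database during the request.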
I realized that this issue only appears when APP_ENV=local; with APP_ENV=production the slowdown is no longer present. If someone comes across this issue in the future, just set the environment to production when doing benchmarks. I'm guessing this is more an issue with Octane than with Pulse.
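For anyone benchmarking later, a minimal .env sketch of the settings mentioned above (assuming a standard Laravel setup):

```ini
# .env — benchmark with production settings
APP_ENV=production
APP_DEBUG=false
```

Caching the configuration beforehand with `php artisan config:cache` is also commonly recommended so the environment files aren't re-parsed on every boot.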
Closing this for now, thanks for the support
@AlliBalliBaba, I would certainly recommend using the Redis ingest for a performance sensitive configuration, and as you are already using Octane I can only assume that your app is performance sensitive.
Changing the APP_ENV is interesting. Do you happen to have Telescope installed, or any other debugging tools that are only active when the environment is local?
Pulse doesn't change ingest functionality based on app.env. The only place we even reference it is when you are viewing the dashboard.
I found out the blocking is generally an issue with Octane, or more specifically with PHP streams. Its appearance after installing Pulse must have been a coincidence, coupled with the increased logging overall.
I will post feedback here if I find any blocking that comes specifically from Pulse. Right now it seems like Pulse is fully compatible with Octane though :)
Great to hear. If you do notice anything Pulse-specific, please let us know.
Pulse Version
1.0.0-beta15
Laravel Version
11.0.7
PHP Version
8.3.3
Livewire Version
3.4.9
Database Driver & Version
mysql:8.0 Docker Image
Description
When conducting benchmarks with Laravel Octane, I noticed that Pulse significantly reduces the number of requests Laravel can handle. Interestingly, this only occurs when Laravel Octane is running on multiple CPU cores; when running on a single core, the slowdown is negligible. This could also be an issue with Octane. I opened it here instead since Pulse seems to do something that makes workers block each other. It would be beneficial to find out what exactly leads to the blocking. In the worst case, Pulse just isn't compatible with Octane.
The raw benchmarks using wrk (fetching a User from MySQL and returning it as JSON; Alpine Docker inside WSL2, Laravel Octane with Swoole, 13th Gen Intel(R) Core(TM) i9-13900H 2.60 GHz):
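For reproducibility, a typical wrk invocation against such an endpoint looks like the following; the thread count, connection count, duration, port, and path are assumptions for illustration, not values taken from this issue:

```shell
# Hypothetical parameters: 4 threads, 100 open connections, 30-second run.
# The port and path depend on how Octane was started (octane:start --port=...).
wrk -t4 -c100 -d30s http://127.0.0.1:8000/performance
```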
Benchmarking without Pulse with 1 Worker (1 CPU Core)
Benchmarking with Pulse with 1 Worker (1 CPU Core)
Benchmarking without Pulse with 20 Workers (20 CPU Cores)
Benchmarking with Pulse with 20 Workers (20 CPU Cores)
Basically, without Pulse the number of requests increases as the number of CPU cores increases (as expected). With Pulse, the number of handled requests actually decreases as the number of CPU cores increases.
Steps To Reproduce