laravel / pulse

Laravel Pulse is a real-time application performance monitoring tool and dashboard for your Laravel application.
https://pulse.laravel.com
MIT License

Pulse significantly slows down Laravel Octane when using multiple CPU cores #346

Closed · AlliBalliBaba closed this issue 8 months ago

AlliBalliBaba commented 8 months ago

Pulse Version

1.0.0-beta15

Laravel Version

11.0.7

PHP Version

8.3.3

Livewire Version

3.4.9

Database Driver & Version

mysql:8.0 Docker Image

Description

When conducting benchmarks with Laravel Octane I noticed that Pulse significantly reduces the number of requests Laravel can handle. Interestingly, this only occurs when Laravel Octane is running on multiple CPU cores; when running on a single core, the slowdown is negligible. This could also be an issue with Octane, but I opened it here instead since Pulse seems to do something that makes workers block each other. It would be beneficial to find out what exactly leads to the blocking. In the worst case, Pulse just isn't compatible with Octane.

The raw benchmarks using wrk (fetching a User from MySQL and returning it as JSON, Alpine Docker inside WSL2, Laravel Octane with Swoole, 13th Gen Intel(R) Core(TM) i9-13900H 2.60 GHz) are below; an example wrk invocation is sketched after the results:

Benchmarking without Pulse with 1 Worker (1 CPU Core)

[wrk output screenshot]

Benchmarking with Pulse with 1 Worker (1 CPU Core)

[wrk output screenshot]

Benchmarking without Pulse with 20 Workers (20 CPU Cores)

[wrk output screenshot]

Benchmarking with Pulse with 20 Workers (20 CPU Cores)

[wrk output screenshot]

Basically, without Pulse the number of handled requests increases as the number of CPU cores increases (as expected). With Pulse, the number of handled requests actually decreases as the number of CPU cores increases.

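For anyone reproducing this, a wrk invocation along these lines is what such a benchmark typically looks like; the thread count, connection count, duration, and host below are assumptions, not the exact values used in the runs above:

    # wrk benchmark against the /performance endpoint:
    # 4 threads, 64 open connections, 30 second duration (values are illustrative)
    wrk -t4 -c64 -d30s http://localhost:8000/performance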

Steps To Reproduce

timacdonald commented 8 months ago

Hi @AlliBalliBaba,

Can you please run the following command on the application you are experiencing this on and let us know the output?

php artisan tinker --execute="echo config('pulse.ingest.driver')"

timacdonald commented 8 months ago

Could you also provide the code within the /performance endpoint? What work is happening there?

AlliBalliBaba commented 8 months ago

Hey, thanks for the quick answer, php artisan tinker --execute="echo config('pulse.ingest.driver')" returns storage

The /performance endpoint just does the following, where the User is stored in the same database as the Pulse tables:

public function handle()
{
    return ['user' => User::first()];
}

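For context, a self-contained equivalent of that handler, assuming a plain closure route (the original handle() presumably lives in a controller or action class), would look roughly like:

    // routes/web.php: hypothetical closure route equivalent to the handler above
    use App\Models\User;
    use Illuminate\Support\Facades\Route;

    Route::get('/performance', function () {
        // Fetch the first user from the same MySQL database that holds the Pulse tables
        return ['user' => User::first()];
    });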

I think your suspicion is correct that the storage INGEST_DRIVER is what makes workers block each other. I will try changing the size of the ingest buffer, or setting the driver to redis, and see if it makes a difference.

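For reference, switching the ingest driver is an environment-level change. Based on the default Pulse configuration, something like the following should do it; the exact keys can be confirmed in config/pulse.php:

    # .env: switch Pulse ingest from the default storage driver to Redis
    PULSE_INGEST_DRIVER=redis

    # The ingest buffer size can also be tuned via config/pulse.php
    # (PULSE_INGEST_BUFFER, if memory serves) when adjusting the buffer rather than the driver.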

AlliBalliBaba commented 8 months ago

I realized that this issue only appears when APP_ENV=local; with APP_ENV=production the slowdown is no longer present. If someone comes across this issue in the future, just set the environment to production when doing benchmarks. I'm guessing this is more an issue with Octane than with Pulse. Closing this for now, thanks for the support.

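In other words, the relevant .env change for benchmark runs is just the environment; pairing it with APP_DEBUG=false is common practice, though the thread only mentions APP_ENV:

    # .env: avoid local-only debugging overhead while benchmarking
    APP_ENV=production
    APP_DEBUG=false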

timacdonald commented 8 months ago

@AlliBalliBaba, I would certainly recommend using the Redis ingest for a performance-sensitive configuration, and since you are already using Octane I can only assume that your app is performance sensitive.

Changing the APP_ENV is interesting. Do you happen to have Telescope installed, or any other debugging tools that are only active when local?

Pulse doesn't change ingest functionality based on the app.env. The only place we even reference it is when you are viewing the dashboard.

[screenshot of the relevant code]

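For context, the dashboard reference mentioned above is the authorization step. The Pulse docs describe gating dashboard access with a viewPulse gate roughly as follows; the isAdmin() check is illustrative, and as far as I know the dashboard is only accessible in the local environment unless such a gate is defined:

    // app/Providers/AppServiceProvider.php
    use App\Models\User;
    use Illuminate\Support\Facades\Gate;

    // Inside the provider's boot() method: controls access to the Pulse dashboard
    Gate::define('viewPulse', function (User $user) {
        return $user->isAdmin();
    });
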
AlliBalliBaba commented 7 months ago

I found out the blocking is generally an issue with Octane, or more specifically with PHP streams. Its occurrence after installing Pulse must have been a coincidence, coupled with the increased logging overall.

I will post feedback here if I find any blocking that comes specifically from Pulse. Right now it seems like Pulse is fully compatible with Octane though :)

timacdonald commented 7 months ago

Great to hear. If you do notice anything Pulse-specific, please let us know.