craftcms / cms

Build bespoke content experiences with Craft.
https://craftcms.com

Slow response time and 504 timeouts in Laravel Valet. #6289

Open havardlj opened 4 years ago

havardlj commented 4 years ago

Looks like more and more users, including me, are having the same issue running Craft 3 with Laravel Valet. Craft 2 and other CMSes don't seem to have these problems. The developers over on the Valet issue tracker are pretty much out of ideas, so I was wondering if anyone here has any input. I absolutely love Valet, and I don't want to go back to MAMP or Homestead.

https://github.com/laravel/valet/issues/966 https://github.com/laravel/valet/issues/965 https://github.com/laravel/valet/issues/744

brandonkelly commented 4 years ago

Have you tested the same site on MAMP/Homestead/Nitro to be sure it’s definitely a Valet-specific issue, as opposed to the app just generally running slowly?

havardlj commented 4 years ago

The first time I had this problem, I was running Homestead as well, and I had no issues there. I just downloaded Nitro (looks awesome) and my sites load instantly there, still slow with Valet.

havardlj commented 4 years ago

I'll probably just change everything over to Nitro when mkcert gets implemented.

brandonkelly commented 4 years ago

Are you using a .local host name by chance?

havardlj commented 4 years ago

No, .test

brandonkelly commented 4 years ago

Is Xdebug enabled? If so, does it speed up when it’s disabled?

havardlj commented 4 years ago

No, it is not enabled.

brandonkelly commented 4 years ago

Huh. Make sure Dev Mode is enabled, then clear out your storage/logs/ folder, and try to reproduce the issue. Keep clearing out storage/logs/ between each request until you are able to reproduce the slow response time. Once you have, try searching through storage/logs/web.log and check the timestamps of each of the log entries. If you can find a couple entries with several seconds between them, that should give you a hint as to what is taking so long.
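For reference, a minimal config/general.php sketch for turning Dev Mode on, using Craft 3's multi-environment config format (adjust to however your project handles environments):

<?php
// config/general.php: minimal sketch for enabling Dev Mode while debugging

return [
    // Settings that apply to all environments
    '*' => [
        'devMode' => false,
    ],

    // Dev environment only (matched against CRAFT_ENVIRONMENT)
    'dev' => [
        'devMode' => true,
    ],
];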

havardlj commented 4 years ago

I saved my biggest Matrix field and got a 40-second response time, but all the timestamps in my log file (850 KB from one save) are evenly distributed across those 40 seconds.

nettum commented 4 years ago

For the record, I'm finding Craft CMS 3 to be (a lot) slower on Heroku than on other hosting providers. Saving a big Matrix always times out (the Heroku router terminates the request after 30 seconds).
I suspect it's either the round trips for all the calls to the database (we use MySQL via JawsDB on Heroku) or some heavy file I/O that the ephemeral filesystem doesn't like.

Also, I've noticed that requests (both front end and CP) to Craft locally from colleagues running Windows are generally a lot slower than on Linux/macOS. This is probably because of the Windows filesystem (Hello there, WSL2 :heart:).

Not sure if this is relevant at all, but if we can find some common bottlenecks/setups maybe it is, so I thought it was worth sharing even though it's not related to Laravel Valet.

jasonmccallister commented 3 years ago

@nettum are you able to set up https://blackfire.io to profile the requests on Heroku? It might be an issue with the DB instance, but profiling the slow requests would show exactly what's hurting performance. It might be a PHP configuration on Heroku you can tweak to improve things, but a profiled request would really narrow it down.

There is also a helpful guide to improve PHP performance on Heroku.

danethomas commented 3 years ago

@nettum did you ever get to the bottom of your Craft slow performance on Heroku?

nettum commented 3 years ago

@danethomas Unfortunately not. We had a couple of Craft projects on Heroku because we needed the juice to tackle spikes in traffic, and we've had success with Heroku on other stacks like Symfony + MySQL, Node + MongoDB, etc.

We used the same stack for Craft as with Symfony (x number of standard or performance dynos, Heroku Redis, and JawsDB MySQL with SSD), but unfortunately the admin stuff was very slow and we didn't get much of a performance boost on the frontend either.

We had to move it all away before I got time to do enough debugging to find the bottleneck. My gut feeling says it's probably related to the database, but I can't be sure. It could be because Craft fires a lot of queries and there is some latency between Heroku and JawsDB (even though they're in the same region). Or that Craft uses the filesystem extensively and Heroku doesn't like that. :man_shrugging:

Not sure whether others who run Craft on Heroku successfully (with a lot of traffic / medium-sized DBs) use MySQL, MariaDB, or Postgres?

Are you having similar issues?

timkelty commented 3 years ago

@nettum @danethomas I've seen the same bottleneck on Heroku, bad enough that the site was functionally inoperable under any amount of traffic. After some analysis in New Relic, it was clear the bottleneck was at the database (in this case, Heroku Postgres).

Bumping the Postgres tier up significantly barely seemed to help; however, the moment we switched the database connection to an AWS RDS instance instead, all was well.

I had assumed it was something going on with Heroku Postgres, but it sounds like @nettum is seeing similar results with MySQL via JawsDB…are you using the Heroku addon for that?
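For anyone wanting to try the same switch, pointing Craft 3 at an external database is just a matter of changing config/db.php (or the env vars it reads). A rough sketch (the env var names below are placeholders, not from this thread):

<?php
// config/db.php: point Craft at an external database host (e.g. an RDS endpoint)
// instead of the Heroku add-on's database. The env var names are placeholders.

return [
    'driver' => getenv('DB_DRIVER'),     // 'mysql' or 'pgsql'
    'server' => getenv('DB_SERVER'),     // e.g. the RDS endpoint hostname
    'port' => getenv('DB_PORT'),
    'user' => getenv('DB_USER'),
    'password' => getenv('DB_PASSWORD'),
    'database' => getenv('DB_DATABASE'),
];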

nettum commented 3 years ago

Yes, but not the native Heroku one, as that only supports Postgres. We were using JawsDB (https://elements.heroku.com/addons/jawsdb) on the Thresher plan (single tenant and SSD). For me it seemed like the queries themselves were fast, but something in between was taking up the time...

danethomas commented 3 years ago

OK, we're using JawsDB as well, and did some basic checking earlier in the week to make sure it was in the same region, etc. We haven't spent much time (yet) debugging things, but we're very much experiencing the same issue.

We've enabled Blitz with Redis and Cloudflare, and that's helped the front end a lot (not without some challenges), but the admin CP is still painfully slow, and saving entries (particularly those with decent-sized Matrix fields) will hit Heroku's 30-second max request limit and time out.

We'll look more into it.

jasonmccallister commented 3 years ago

I wonder if it's possible to get Blackfire on Heroku and perform a couple of profiles to see if we can isolate the application code?

danethomas commented 3 years ago

We're currently running things in Docker, so we're not using the default Heroku PHP buildpack, but it looks like Blackfire is available by default: https://blog.blackfire.io/better-heroku-integration.html

I'll see if we can get Blackfire up and running and report back.

danethomas commented 3 years ago

@jasonmccallister is there a way to completely prevent Craft from writing logs to the filesystem? We've got log entries installed, but I do recall having a plugin (Geomate) running in debug mode for a little while, and things really ground to a halt then. I suspect it could be related to writing logs to the filesystem.

We're going to enable Datadog (which we use elsewhere) for some profiling.

jasonmccallister commented 3 years ago

@danethomas we have the following settings to help with container-based deployments:

  1. https://craftcms.com/docs/3.x/config/#craft-ephemeral
  2. https://craftcms.com/docs/3.x/config/#craft-stream-log

Hopefully that helps. I think @timkelty is going to set up an environment to test and profile, and we can review internally and let you know what we find!

Stream log sends additional logs to output. You should be able to disable the logs completely by overriding the logging component, but I have not tried that yet.
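For reference, a minimal sketch of how those two settings can be applied. They are PHP constants, so they need to be defined before Craft bootstraps; the bootstrap lines below assume the standard Craft 3 starter project's web/index.php:

<?php
// web/index.php (excerpt): define these before the Craft bootstrap runs

// Tell Craft it's running on ephemeral storage, so it won't rely on
// writing to the local filesystem for things it expects to persist.
define('CRAFT_EPHEMERAL', true);

// Send log output to stdout/stderr instead of storage/logs/.
define('CRAFT_STREAM_LOG', true);

// ...then the normal bootstrap:
require dirname(__DIR__) . '/bootstrap.php';
$app = require CRAFT_VENDOR_PATH . '/craftcms/cms/bootstrap/web.php';
$app->run();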

timkelty commented 3 years ago

@danethomas there isn't a direct Craft config setting to disable file logging (though that might be a nice feature request), but you can do it via config/app.php like so:

<?php
// config/app.php
use craft\log\Dispatcher;

return [
    'components' => [
        'log' => function() {
            $dispatcher = new Dispatcher();

            // Disable the file target so nothing gets written to storage/logs/
            $dispatcher->targets[Dispatcher::TARGET_FILE]->enabled = false;

            return $dispatcher;
        },
    ],
];