MaximeMichaud / nginx-autoinstall

Compile Nginx from source with custom modules on Debian and Ubuntu
GNU General Public License v3.0
7 stars · 1 fork

PageSpeed + Nginx 1.25.2 (HTTP3) [NOT A BUG] #2

Open mtx-z opened 11 months ago

mtx-z commented 11 months ago

Hello,

did you manage to build Nginx 1.25.2 and Pagespeed?

I tried the "dirty" instruction here without success: https://github.com/apache/incubator-pagespeed-ngx/issues/1760

I'm wondering whether it's still worth trying to make it work, since the project is officially abandoned. But it was quite useful for optimizing "poorly optimized" websites on the fly, and there's no alternative...

Other useful links: https://github.com/karljohns0n/nginx-more/issues/40, and https://deb.myguard.nl/nginx-modules/, which claims to use a custom PSOL for PageSpeed. I should ask the author about 1.25.2.

I'm working on a quick fork for myself, based on your maintained version of angristan's nginx-autoinstall that I'm used to working with, plus this (for BoringSSL). It fails if I try to add PageSpeed this way.
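
For reference, the "dirty" approach from the linked issue boils down to building nginx from source with the module passed at configure time. A minimal sketch of that flow, with all paths illustrative and nothing verified (the configure/link step is exactly where nginx 1.25.x breaks against the abandoned module and its bundled PSOL):

```shell
#!/bin/sh
# Illustrative sketch of the classic ngx_pagespeed build flow.
# NPS_DIR is a hypothetical local checkout of incubator-pagespeed-ngx;
# this is not a verified recipe for 1.25.2.
NGINX_VERSION=1.25.2
NPS_DIR=../incubator-pagespeed-ngx

# The nginx build itself is the standard configure/make dance; the only
# PageSpeed-specific part is --add-module pointing at the checkout.
CONFIGURE_FLAGS="--with-http_ssl_module --with-http_v2_module --with-http_v3_module --add-module=${NPS_DIR}"

# Printed rather than executed, since this sketch has no sources on disk:
echo "wget https://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz"
echo "tar xzf nginx-${NGINX_VERSION}.tar.gz && cd nginx-${NGINX_VERSION}"
echo "./configure ${CONFIGURE_FLAGS}"
echo "make && make install"
```

With a recent nginx, it is typically the `make` step that fails inside the module sources, which matches what the linked issue reports.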

MaximeMichaud commented 11 months ago

Hello, I'll try to answer as best I can.

I no longer use the pagespeed module for several reasons. It's slow, heavy, and has been abandoned. Additionally, it sometimes had issues in production, requiring the cache to be manually cleared.

I would recommend considering an alternative solution. Depending on your skills, there are multiple options; it all depends on the optimizations you need. If it's something basic like reducing the size of JS/CSS files, you might want to look into Cloudflare.

Otherwise, to replicate on the fly the optimizations that PageSpeed provided, you could consider going serverless: specifically, workers that execute the optimizations based on code you provide. With Cloudflare you can achieve this and get it up and running quickly, depending on your needs. However, depending on the site's traffic, the workers might incur some costs (especially if they run on every request). I personally use workers to lazy-load YouTube players and only load them if the user wants to.

If you truly wish to use PageSpeed, I'm not really in a position to help if you want to implement it on a recent and functional version. Many modules haven't been updated, and there have been breaking changes. If you truly want HTTP3 + PageSpeed, you might consider using the Cloudflare patch and opting for an older stable version of nginx to get pagespeed working.

Hope this helps :)

GwynethLlewelyn commented 8 months ago

Hi, I just wanted to add that I totally subscribe to Maxime's comments.

I'm in the slow process of removing PageSpeed from all my websites, and ultimately from nginx itself (hopefully a version more recent than the ones @angristan originally supported...), because I came to the same conclusion as Maxime.

After doing several tests with a very different setup — bypassing PageSpeed and "forcing" Cloudflare to cache far more (by default it only caches media, CSS, and JS) using the 'new' Cache Rules (as opposed to the 'old' Page Rules) — I got orders of magnitude better performance (reaching grade A on GTMetrix, scoring above 97%, my first ever for a production website!), while the many quirks introduced by PageSpeed disappeared completely, making everything so much smoother.

It is my understanding that now the recommendations for serving web pages are almost the opposite of what they were, say, 7 or 8 years ago. Back then, in the glorious HTTP/1.1 days, we were told to pull everything from cloud-based CDNs (e.g., JS, CSS, webfonts...) and keep as little on 'our' own servers, and even that should be heavily compressed/minified/reduced, possibly condensed together into a single file (to avoid multiple connections), and so forth.

Now we all live in the post-Let's Encrypt era, with universal HTTP/2 support (and plenty of HTTP/3 support), so it's much more efficient to funnel everything through a single, optimised pipe and host everything on "our" side as well. For that, we rely on Cloudflare to push all content to the edge, as close as possible to the requester. This has some implications for how we "prepare" things for Cloudflare: it's far more important to get all the proper cache headers correctly sorted out than to save a handful of bytes here and there (which was what PageSpeed used to do so well); Cloudflare does the rest.
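
One practical way to check that the cache headers actually had the intended effect is to look at the response: Cloudflare reports whether the edge served a cached copy in its cf-cache-status header (HIT, MISS, BYPASS, and so on). A small helper sketch; the example URL in the comment is purely illustrative:

```shell
#!/bin/sh
# Extract Cloudflare's cache verdict from a block of response headers.
# cf-cache-status is a real Cloudflare response header; the URL below is not real.
cache_status() {
  grep -i '^cf-cache-status:' | tr -d '\r' | awk '{print toupper($2)}'
}

# Typical use (requires a live site behind Cloudflare):
#   curl -sI https://example.com/style.css | cache_status
# Requesting the same asset twice should flip MISS to HIT once
# your Cache Rules cover it.
```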

Also, Cloudflare fully supports HTTP/3 and manages it flawlessly.

Granted, there are some reasons for not using Cloudflare and serving content directly from nginx (for instance, for a few domain names that Cloudflare doesn't support), and I suppose that, for those cases, it still makes sense to compile the PageSpeed module...

angristan commented 8 months ago

I agree! I stopped using PageSpeed a long time ago, and eventually, Nginx in favour of a more straightforward yet performant setup using Cloudflare and Caddy (which has HTTPS by default ❤️).

MaximeMichaud commented 8 months ago

In the ongoing debate between Caddy and NGINX within the web server arena, I believe both have their unique strengths. While NGINX is widely acclaimed for its speed, flexibility, and comprehensive documentation, I feel that Caddy's merits often go underappreciated. Personally, I find Caddy intriguing not just for its default HTTPS implementation – a feature I consider basic for my requirements – but more so for its HTTP/3 support and innovative design elements, including the choice of programming language and development approach.

However, from my experience, Caddy isn't without its shortcomings. For instance, it lacks some native functionalities, like Brotli support, which was apparent to me from the outset.

For scenarios requiring detailed customization and specific features, I still see NGINX as a robust solution. I am keenly looking forward to NGINX integrating HTTP/3 as seamlessly as LiteSpeed or Caddy; such an enhancement would significantly elevate its existing feature set. Regarding my use of Cloudflare, I prefer to use their direct certificate (not the Universal one), which, in my opinion, is an optimal solution.

MaximeMichaud commented 8 months ago

Indeed, the landscape of web page serving has evolved, even with the advent of HTTP/2 and HTTP/3. While there are still benefits to combining CSS/JS files, it's crucial to do it correctly: improper implementation is more likely to introduce issues than advantages. It can be effective if you precisely control the process and only merge requests that load simultaneously and within the same context. Otherwise, it can slow down page loading, since certain elements may need to load in their entirety before the page becomes usable.
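
As a trivial illustration of why the "same context" caveat matters: the merge itself is the easy part, while the hard part is choosing files that always load together, in the right order, since later rules in a stylesheet win over earlier ones. File names here are made up:

```shell
#!/bin/sh
# Naive bundling: concatenation order decides which CSS rule wins,
# so only merge files that always load together, in this order.
# File names are illustrative.
printf 'body { color: black; }\n' > base.css
printf 'body { color: teal; }\n'  > theme.css   # meant to override base

cat base.css theme.css > bundle.css   # theme.css comes last, so it wins

# Reversing the order would silently undo the theme's override.
tail -n 1 bundle.css
```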

I moved away from PageSpeed a while ago and haven't looked back. The switch has significantly sped up my compilation processes and the restart/reload times for NGINX. In my experience, the only other module that rivals PageSpeed in terms of inducing latency is ModSecurity.

For those with Cloudflare Pro, you can enable rules that are essentially equivalent to ModSecurity. And even if you don't have Cloudflare Pro, I highly recommend Crowdsec. It’s effective, straightforward to implement, and generally yields very few false positives, depending on your configuration.

Universal adoption of HTTP/2 and HTTP/3 isn't a given. While NGINX is a staple on all my web servers, when I need an exceptional reverse proxy on par with NGINX, my trust lies with HAProxy.