Closed: hostingnuggets closed this issue 4 years ago
Purging all URLs by hitting /purge is not possible, because there is no way to know which URLs are currently cached.
It's better to use the group-permission hack. Another option could be setting 0777 permissions on the cache folder, though this may still need tweaking, as new files and subfolders created under the cache folder may not get 0777.
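For illustration only, the crude version of that 0777 approach would be a one-off chmod on the cache path (using the path mentioned later in this thread); as noted above, entries nginx creates afterwards may not keep these permissions:
sudo chmod -R 0777 /var/run/nginx-cache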
@MiteshShah any suggestion on this?
I understand that it's not possible to use /purge for clearing all pages from the cache, but the permission hack of granting read/write to everyone is really a no-go. I am hosting several websites under different users, and it would be far too dangerous, since anyone could purge another user's entire website.
It would be great if you could come up with a better solution; otherwise your plugin is unusable in a multi-user hosting environment like mine.
Sorry, there is no alternative solution. Nginx allows only a single zone, and inside a zone you cannot differentiate cached entries.
Maybe you can upgrade to the Nginx Plus version - http://nginx.com/products/
They offer wildcard purging in the Plus version, which can help you purge the cache for a whole site.
Also, our EasyEngine project will have something for this in the future - https://github.com/rtCamp/easyengine/issues?milestone=9&state=open
Then I only see one way: implementing a special URL which would purge all pages recursively. IMHO this should be possible.
Not possible with https://github.com/FRiCKLE/ngx_cache_purge - the module that handles the URL-based purge. It supports purging only a single URL per request; no wildcards or recursion.
Nginx Helper's recursive cache clean doesn't use this module; it simply deletes the cache files from the server.
I agree that it is not possible using this specific nginx cache module, but there is a trivial workaround: using the WordPress API, simply get a list of all pages/posts and then loop over them, sending an individual purge request for each.
On large sites that won't work well: if you have 1000 pages, you are effectively making 1000 purge calls, and the PHP process may time out before the loop finishes.
In EasyEngine, we plan to run a cron job, say every minute, that just makes the cache files world-writable on the fly. That way the cache can be cleared by everyone.
Another option is https://github.com/wandenberg/nginx-selective-cache-purge-module/ (not tested, but on the to-do list). This will require recompiling nginx.
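As a rough illustration of the cron idea mentioned above (this is only a sketch, not the actual EasyEngine implementation), such a job could be a hypothetical /etc/cron.d entry, reusing the cache path that appears later in this thread:
* * * * * root find /var/run/nginx-cache -type d -exec chmod 0777 {} + -o -type f -exec chmod 0666 {} +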
Even on large sites with 1000 pages it would be feasible within PHP's default max_execution_time of 30 seconds: instead of running each purge call serially, you simply run them in parallel, maybe 50 at a time; that should not be a problem.
I still think it is a bad idea to make the cache files world-readable/writable, as it would allow anyone on the same server to purge the cache. On top of that, it is not efficient to have a cron job running every minute doing a chmod a+rw on a whole cache directory structure.
In the meantime I have opened issue FRiCKLE/ngx_cache_purge#19 as a feature request for having a URL which could purge the whole cache in one go.
Even on large sites with 1000 pages it would be feasible within PHP's default max_execution_time of 30 seconds: instead of running each purge call serially, you simply run them in parallel, maybe 50 at a time; that should not be a problem.
If you think this is feasible, feel free to send a pull request. As far as I know, PHP is not good at parallel processing, and the code will become a lot more complicated.
We will instead get a Redis-backed nginx module working for the EasyEngine shared-hosting release.
In our production setup, frontend caching happens on nodes completely separate from our PHP-FPM instances, so deleting local cache directories is never an option for us. While an improvement to the nginx module adding a purge-all URL would ultimately be preferable, a stop-gap in the form of an option to send multiple purge requests when issuing a 'purge all' command would be helpful. Using cURL's curl_multi_init and batching the requests should help quite a bit in that regard.
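For what it's worth, here is a rough shell sketch of that batching idea (using xargs -P rather than PHP's curl_multi). The sitemap URL and the /purge/<path> location are assumptions and would need to match your own setup:
#!/bin/bash
# Sketch only: purge every URL listed in the sitemap, 50 requests at a time.
# Assumes https://www.example.com/sitemap.xml lists every page and that a
# /purge/<path> location backed by ngx_cache_purge is configured; adjust both.
SITE="https://www.example.com"
curl -s "$SITE/sitemap.xml" \
  | grep -oP '(?<=<loc>)[^<]+' \
  | sed "s|^$SITE|$SITE/purge|" \
  | xargs -n 1 -P 50 curl -s -o /dev/null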
Here is the solution I came up with for our setup.
We use Unix sockets to pass requests to different PHP-FPM pools, like this:
fastcgi_pass unix:/home/php-fpm/sock/$user.sock;
We run nginx as the user apache, but it could be whatever user you use.
We have a [default] pool that runs as apache, listening on:
/home/php-fpm/sock/apache.sock
So in our config we just need to add the following line to pass those requests to the apache pool (this is simplified; we use the latter, more secure config):
### simple
if ( $args ~ 'nginx_helper_urls=all' ) { set $user 'apache'; }
or
### more secure (also disables caching for the backend)
location ~ ^/wp-admin/.*\.php$ {
    try_files $uri =404;
    if ( $args ~ 'nginx_helper_urls=all' ) { set $user 'apache'; }
    ### pass the fastcgi cache directory on to the plugin
    fastcgi_param RT_WP_NGINX_HELPER_CACHE_PATH $fastcgi_cache_dir;
    ### admin still needs to run as the regular user as well
    fastcgi_pass unix:/home/php-fpm/sock/$user.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}
I know it's been a few years, but any updates on this? Is there a better way?
Hello everyone! Any update on this matter? Thanks
Did anyone find a solution for this?
Closing: this issue is stale, and we are not planning to support PHP-FPM running as a different user.
I hit this problem, and solved it by using one of the scripting languages supported by nginx -- either Lua or Perl will do.
https://snippets.webaware.com.au/snippets/purging-nginx-cache-users-differ/
I've figured out how to give both Nginx and PHP the necessary access to the cache directory through a common group. Assuming your PHP-FPM process is running as example-user, follow these steps:
1. Create a Common Group:
Create a new group, let's call it example-user-cache:
sudo groupadd example-user-cache
2. Add Users to the Group:
Add both the www-data (Nginx) and example-user (PHP) users to the example-user-cache group:
sudo usermod -aG example-user-cache www-data
sudo usermod -aG example-user-cache example-user
These commands append the specified group (example-user-cache) to the list of supplementary groups for each user.
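To confirm the membership took effect, you can check with id; note that running processes only pick up new supplementary groups after they are restarted, which step 7 takes care of:
id www-data
id example-user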
3. Set Group Ownership: Change the group ownership of the cache directory to the newly created group:
sudo chown -R www-data:example-user-cache /var/run/nginx-cache/mywebsite.com
4. Set Group Permissions: Set the group permissions on the cache directory:
sudo chmod -R 770 /var/run/nginx-cache/mywebsite.com
sudo chmod -R g+rwxs /var/run/nginx-cache/mywebsite.com
This allows both the Nginx user and the PHP user in the example-user-cache group full access to all current folders and files.
5. Use ACL to force permissions: Install the ACL package:
sudo apt install -y acl
Now set the correct permissions using ACL:
sudo setfacl -Rm u:www-data:rwx,g:example-user-cache:rwx,d:u:www-data:rwx,d:g:example-user-cache:rwx /var/run/nginx-cache/mywebsite.com
This allows both the Nginx user and the PHP user in the example-user-cache group full access to all future folders and files.
6. Workaround for Nginx bug:
Unfortunately, there seems to be a bug in Nginx when you delete the cache folders inside /var/run/nginx-cache/mywebsite.com and let Nginx recreate them after a page visit. When this happens, the directories and files don't inherit the full permissions set with ACL. To fix this, we can use the inotify-tools package to watch for changes and automatically reapply the ACL permissions.
First, install the package:
sudo apt install -y inotify-tools
Then, create a script that watches the cache directory for changes and reapplies the ACL permissions:
sudo nano /usr/local/bin/monitor_fastcgi_permissions.sh
Paste the following content into that file:
#!/bin/bash

# Directory to monitor
directory="/var/run/nginx-cache/mywebsite.com"

# Function to set permissions based on the changed directory
set_permissions() {
    changed_dir=$1
    # Get the user and group of the changed directory
    user=$(stat -c "%U" "$changed_dir")
    group=$(stat -c "%G" "$changed_dir")
    # Set permissions for the changed directory and its contents
    setfacl -Rm u:"$user":rwx,g:"$group":rwx,d:u:"$user":rwx,d:g:"$group":rwx "$changed_dir"
}

# Monitor the directory recursively for changes
inotifywait -m -r -e modify,create,delete,move "$directory" |
while read path action file; do
    # Reapply permissions on the directory where the event occurred
    set_permissions "$path"
done
Give the script executable rights using:
sudo chmod +x /usr/local/bin/monitor_fastcgi_permissions.sh
Now create a systemd service that runs the script and restarts it if it stops:
sudo nano /etc/systemd/system/monitor-fastcgi-permissions.service
Paste the following content into that file:
[Unit]
Description=Monitor FastCGI cache permissions
After=network.target
[Service]
Type=simple
ExecStart=/usr/local/bin/monitor_fastcgi_permissions.sh
Restart=always
[Install]
WantedBy=multi-user.target
Restart the systemctl daemon and start and enable the service:
sudo systemctl daemon-reload
sudo systemctl start monitor-fastcgi-permissions
sudo systemctl enable monitor-fastcgi-permissions
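You can verify the watcher is running with:
sudo systemctl status monitor-fastcgi-permissions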
7. After making these changes, restart both Nginx and PHP-FPM:
sudo systemctl restart nginx && sudo systemctl restart php*
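As a final sanity check (using the same paths and user names as above), confirm that the PHP user can actually create and delete files inside the cache directory:
sudo -u example-user touch /var/run/nginx-cache/mywebsite.com/.permtest
sudo -u example-user rm /var/run/nginx-cache/mywebsite.com/.permtest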
I am using Nginx Helper 1.8.1 along with FastCGI caching, and I noticed that purging does not work. The problem is that I have one PHP-FPM pool per website, each using a different user account. My nginx runs as www-data, so the files created by FastCGI caching also belong to www-data. This means Nginx Helper cannot delete (unlink) the cache files, because PHP-FPM runs as another user. Do you have any workaround or solution for this?
Why not always use the /purge URL instead of unlinking files?
Here is the PHP-FPM error message, just in case:
2014/06/16 12:42:23 [error] 28902#0: *41796 FastCGI sent in stderr: "PHP message: PHP Warning: opendir(/var/run/nginx-cache/mywebsite.com/): failed to open dir: Permission denied in /var/www/mywebsite.com/htdocs/wp-content/plugins/nginx-helper/purger.php on line 686" while reading response header from upstream, client: XX.XX.XX.XX, server: www.mywebsite.com, request: "GET /wp-admin/post.php?post=482&action=edit&message=1&nginx_helper_action=purge&nginx_helper_urls=all&_wpnonce=a382bcef22 HTTP/1.1", upstream: "fastcgi://unix:/var/lib/nginx/fastcgi/mywebsite.com.sock:", host: "www.mywebsite.com", referrer: "http://www.mywebsite.com/wp-admin/post.php?post=482&action=edit&message=1"