mitchellkrogza / nginx-ultimate-bad-bot-blocker

Nginx Block Bad Bots, Spam Referrer Blocker, Vulnerability Scanners, User-Agents, Malware, Adware, Ransomware, Malicious Sites, with anti-DDOS, Wordpress Theme Detector Blocking and Fail2Ban Jail for Repeat Offenders

[BUG] Nginx log shows a permission denied error for /etc/nginx/conf.d/globalblacklist.conf #567

Closed. Danrancan closed this issue 2 months ago

Danrancan commented 2 months ago

Describe the bug

I am successfully running the Nginx Ultimate Bad Bot Blocker, but I am seeing an error in my Nginx logs. Here is the error:

2024/04/17 06:00:09 [info] 634373#634373: pagespeed: rollback gzip, explicit configuration in /etc/nginx/nginx.conf:164
2024/04/17 06:00:09 [emerg] 634373#634373: open() "/etc/nginx/conf.d/globalblacklist.conf" failed (13: Permission denied) in /etc/nginx/nginx.conf:205

I'm not sure whether this error means Nginx itself cannot read globalblacklist.conf, or whether it is pagespeed that cannot read it. Can you answer this? As a note, the pagespeed module was turned off when this error was encountered. Also, after loosening the permissions on globalblacklist.conf with chmod, they revert to stricter permissions after an update to the bad bot blocker. Can you make the bad bot blocker loosen the permissions after each update so this issue is fixed? Are there any other workarounds you can suggest?
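
For reference, a quick way to check the file's permissions and whether the www-data worker user (per the nginx.conf below) can read it looks roughly like this; this is only a sketch of a workaround after each update, not a fix in the blocker itself:

# Inspect ownership and mode of the generated blacklist
ls -l /etc/nginx/conf.d/globalblacklist.conf

# Try reading it as the worker user; "Permission denied" here points at
# filesystem permissions rather than anything pagespeed-specific
sudo -u www-data head -n 1 /etc/nginx/conf.d/globalblacklist.conf

# If an update tightened the permissions again, make the file world-readable
# and re-test / reload the configuration
sudo chmod 644 /etc/nginx/conf.d/globalblacklist.conf
sudo nginx -t && sudo systemctl reload nginx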

To Reproduce

Just install and run the Nginx Bad Bot Blocker.

Expected behavior

The Permission denied error shouldn't appear.

Screenshots

N/A

Copy of nginx.conf


#user www-data www-data;
user www-data;
worker_processes auto;
pid /run/nginx.pid;
load_module modules/ngx_http_modsecurity_module.so;
load_module modules/ngx_pagespeed.so;
#load_module modules/ngx_http_cache_purge_module.so;
load_module modules/ngx_http_brotli_filter_module.so;
load_module modules/ngx_http_brotli_static_module.so;

events {
    worker_connections 1024;
    # multi_accept on;
    # epoll: This is an efficient method of processing connections available on Linux 2.6+.
    # The method is similar to the FreeBSD kqueue. There is also the additional directive
    # epoll_events. This specifies the number of events that NGINX will pass to the kernel.
    # The default value for this is 512.
    use epoll;
}

http {

    ##
    # MOD SECURITY
    ##

    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;

    ##
    # SSL
    ##

    ssl_session_cache shared:SSL:10m; #SSL session cache
    ssl_session_timeout 10m;
    ssl_prefer_server_ciphers on;

    ##
    # UPLOADS
    ##

    client_max_body_size 300M;
    client_body_buffer_size 300M;
    fastcgi_read_timeout 400;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    types_hash_max_size 4096;
    server_tokens off;
    server_names_hash_bucket_size 256; # Change to 64 if you have long server names
    server_name_in_redirect off;

    ##
    # HARDENING:
    ##

    # Pestmeester.nl # Change to 10 to really harden.
    client_header_timeout 10;
    client_body_timeout 10;
    keepalive_timeout 70;
    send_timeout 10;

    ##
    # NGINX AMPLIFY: Fix Missing HTTP header definitions in proxy_pass
    ##

    # Best practice is to configure a clear set of headers with proxy_pass.
    # The Host header is always important. Add the following to your nginx configuration:
    proxy_set_header Host $host;
    proxy_headers_hash_max_size 4096;
    proxy_headers_hash_bucket_size 4096;
    # and optionally the following:
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Protocol $scheme;

    ##
    # LOGGING
    ##

    # Nginx default log paths
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log notice;

    # Exclude Your Own IP Address in Nginx Access Log (for Amplify-agent)
    map $remote_addr $log_ip {
        "123.444.567.890" 0;
        default 1;
    }

    # NETDATA:
    # Create a custom Nginx log format called netdata that includes information about
    # request_time, and upstream_response_time, measured in seconds with millisecond resolution.
    log_format netdata '$remote_addr - $remote_user [$time_local] '
    '"$request" $status $body_bytes_sent '
    '$request_length $request_time $upstream_response_time '
    '"$http_referer" "$http_user_agent"';

    # AMPLIFY:
    # Create a custom Nginx log format called apm for nginx amplify
    log_format apm '"$time_local" client=$remote_addr '
    'method=$request_method request="$request" '
    'request_length=$request_length '
    'status=$status bytes_sent=$bytes_sent '
    'body_bytes_sent=$body_bytes_sent '
    'referer=$http_referer '
    'user_agent="$http_user_agent" '
    'upstream_addr=$upstream_addr '
    'upstream_status=$upstream_status '
    'request_time=$request_time '
    'upstream_cache_status="$upstream_cache_status" '
    'upstream_response_time=$upstream_response_time '
    'upstream_connect_time=$upstream_connect_time '
    'upstream_header_time=$upstream_header_time';
    # Use Syslog for amplify metric collection
    #access_log syslog:server=127.0.0.1:12000,tag=amplify,severity=info; main_ext;

    # STUB STATUS (Netdata & Amplify)
    server {
        listen 127.0.0.1:80 default_server;
        server_name 127.0.0.1;
        location /nginx_status {
            stub_status on;
        allow 127.0.0.1;
        deny all;
        }
    }

    ##
    # INCLUDES
    ##

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # COMPRESSION
    ##

    # BROTLI:
    # The 'brotli off|on' value enables or disables dynamic (on-the-fly) compression of the content.
    brotli on;
    # The 'brotli_static on' value makes the Nginx server check whether pre-compressed files with the .br extension
    # exist. The 'always' value makes the server send pre-compressed content without confirming whether the browser
    # supports it. Since Brotli is resource-intensive, this module is best suited to reducing bottleneck situations.
    #brotli_static      on;
    brotli_static       always;
    # The brotli_comp_level directive sets the dynamic compression quality. It can range from 0 to 11.
    brotli_comp_level       7;
    #brotli_window      512k;
    # Configure a minimum length in order to have the request compressed, determined by the Content-Length field in the HTTP headers.
    brotli_min_length       20; # or try 21
    # Enable dynamic compression for specific MIME types; text/html responses are always compressed.
    brotli_types application/atom+xml application/javascript application/json application/rss+xml
    application/vnd.ms-fontobject application/x-font-opentype application/x-font-truetype
    application/x-font-ttf application/x-javascript application/xhtml+xml application/xml
    font/eot font/opentype font/otf font/truetype image/svg+xml image/vnd.microsoft.icon
    image/x-icon image/x-win-bitmap text/css text/javascript text/plain text/xml;

    # G-ZIP
    gzip off;
    # Linuxbabe turns gzip_vary on, but we turn it off here because when enabling it, you may have problems clearing the cache.
    #gzip_vary      off;
    #gzip_vary      on;
    #gzip_proxied       any;
    #gzip_min_length    1000;
    #gzip_comp_level    6;
    #gzip_buffers       16 8k;
    #gzip_http_version  1.1;
    #gzip_types application/json application/x-javascript application/javascript application/atom+xml
    #application/rss+xml application/vnd.ms-fontobject application/x-font-ttf
    #application/x-web-app-manifest+json application/xhtml+xml application/xml;

    ##
    # FASTCGI CACHE
    ##

    # LINUXBABE
    # If you installed multiple WordPress sites on the same server, you can create a separate FastCGI cache for each WordPress site.
    # Cached data that are not accessed during the time specified by the inactive parameter get removed from the cache regardless of their freshness.
    # In addition, all active keys and information about data are stored in a shared memory zone, whose name and size are configured by the keys_zone
    # parameter. One megabyte zone can store about 8 thousand keys.
    # By default, inactive is set to 10 minutes.
    #fastcgi_cache_path /usr/share/nginx/fastcgi_cache levels=1:2 keys_zone=phpcache:100m max_size=15g inactive=12h use_temp_path=off;
    #
    #fastcgi_cache_path /var/www/fastcgi_cache/danran.rocks levels=1:2 keys_zone=danran.rocks:100m max_size=150m inactive=12h use_temp_path=off; # On Disk
    fastcgi_cache_path /var/www/cache/fastcgi_cache/danran.rocks levels=1:2 keys_zone=danran.rocks:100m max_size=150m inactive=24h use_temp_path=off; # In Memory (ramdisk)
    #
    #fastcgi_cache_path /var/www/fastcgi_cache/oddcake.net levels=1:2 keys_zone=oddcake.net:100m max_size=150m inactive=12h use_temp_path=off; # On Disk
    fastcgi_cache_path /var/www/cache/fastcgi_cache/oddcake.net levels=1:2 keys_zone=oddcake.net:100m max_size=150m inactive=24h use_temp_path=off; # In Memory (ramdisk)
    #
    #fastcgi_cache_path /var/www/fastcgi_cache/mcmo.is levels=1:2 keys_zone=mcmo.is:100m max_size=700m inactive=12h use_temp_path=off; # On Disk
    fastcgi_cache_path /var/www/cache/fastcgi_cache/mcmo.is levels=1:2 keys_zone=mcmo.is:100m max_size=700m inactive=24h use_temp_path=off; # In Memory (ramdisk)
    #
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    ##
    # Virtual Hosts
    ##

    #include /etc/nginx/botblocker.d/*.conf;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Copy of vhost / website / host .conf file

N/A

Server (please complete the following information):


5.15.0-1050-raspi #53-Ubuntu SMP PREEMPT Thu Mar 21 10:02:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux

nginx/1.25.4

2024/04/17 06:00:09 [info] 634373#634373: pagespeed: rollback gzip, explicit configuration in /etc/nginx/nginx.conf:164
2024/04/17 06:00:09 [emerg] 634373#634373: open() "/etc/nginx/conf.d/globalblacklist.conf" failed (13: Permission denied) in /etc/nginx/nginx.conf:205

Additional information

N/A

mitchellkrogza commented 2 months ago

This is a Pagespeed error. I have not used Pagespeed on any sites for years; it sucks. Remove it: it is not needed and will not speed anything up.
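
For anyone following along, removing Pagespeed from the configuration posted above would roughly mean the following (a sketch only; any pagespeed directives in vhost files would also need to be commented out):

# In /etc/nginx/nginx.conf, comment out the module load line:
#   #load_module modules/ngx_pagespeed.so;
# comment out any remaining "pagespeed ..." directives, then validate and reload:
sudo nginx -t && sudo systemctl reload nginx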