lancachenet / monolithic

A monolithic lancache service capable of caching all CDNs in a single instance
https://hub.docker.com/r/lancachenet/monolithic

FORCE_PERMS_CHECK = false does not stop Full Permissions Check at start up #182

Open jennec opened 8 months ago

jennec commented 8 months ago

Describe the issue you are having

Setting FORCE_PERMS_CHECK = false does not stop the full permissions check at startup after the variable was initially enabled.
Removing the variable from the ENV file, or directly from the container, does not stop the check from happening either.

How are you running the container(s)

version: '2'
x-restart-policy: &restart-policy "always"
services:
  dns:
    image: lancachenet/lancache-dns:latest
    env_file: .env
    restart: *restart-policy
    ports:
      - ${DNS_BIND_IP}:53:53/udp
      - ${DNS_BIND_IP}:53:53/tcp

## HTTPS requests are now handled in monolithic directly
## you could choose to return to sniproxy if desired
#
#  sniproxy:
#    image: lancachenet/sniproxy:latest
#    env_file: .env
#    restart: *restart-policy
#    ports:
#      - 443:443/tcp

  monolithic:
    image: lancachenet/monolithic:latest
    env_file: .env
    restart: *restart-policy
    ports:
      - ${LANCACHE_IP}:80:80/tcp
      - ${LANCACHE_IP}:443:443/tcp
    volumes:
      - ${CACHE_ROOT}/cache:/data/cache
      - ${CACHE_ROOT}/logs:/data/logs

ENV File

## See the "Settings" section in README.md for more details

## Set this to true if you're using a load balancer, or set it to false if you're using separate IPs for each service.
## If you're using monolithic (the default), leave this set to true
USE_GENERIC_CACHE=true

## IP addresses that the lancache monolithic instance is reachable on
## Specify one or more IPs, space separated - these will be used when resolving DNS hostnames through lancachenet-dns. Multiple IPs can >
## Note: This setting only affects DNS, monolithic and sniproxy will still bind to all IPs by default
LANCACHE_IP=192.168.10.31

## IP address on the host that the DNS server should bind to
DNS_BIND_IP=192.168.10.30

## DNS Resolution for forwarded DNS lookups
UPSTREAM_DNS=192.168.10.1

## Storage path for the cached data
## Note that by default, this will be a folder relative to the docker-compose.yml file
CACHE_ROOT=/media/lancache

## Change this to customise the size of the disk cache (default 2000g)
## If you have more storage, you'll likely want to increase this
## The cache server will prune content on a least-recently-used basis if it
## starts approaching this limit.
## Set this to a little bit less than your actual available space
CACHE_DISK_SIZE=4000g

## Change this to allow sufficient index memory for the nginx cache manager (default 500m)
## We recommend 250m of index memory per 1TB of CACHE_DISK_SIZE
CACHE_INDEX_SIZE=500m

## Change this to limit the maximum age of cached content (default 3650d)
CACHE_MAX_AGE=3650d

## Set the timezone for the docker containers, useful for correct timestamps on logs (default Europe/London)
## Formatted as tz database names. Example: Europe/Oslo or America/Los_Angeles
TZ=Europe/London

FORCE_PERMS_CHECK = false
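
One thing worth checking in the file above: depending on the compose version, whitespace around = in an env_file line may not be trimmed, so FORCE_PERMS_CHECK = false can end up defining a differently named variable (or a value with a leading space) rather than FORCE_PERMS_CHECK=false, leaving the container with no recognisable setting at all. A quick way to see what the container actually received:

docker exec $(docker compose ps -q monolithic) env | grep -i perms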

DNS Configuration

DHCP clients are configured with the lancache DNS server. DNS resolution is not the issue.

Output of container(s)

Executing hook /hooks/entrypoint-pre.d/20_perms_check.sh
Running fast permissions check
Doing full checking of permissions (This WILL take a long time on large caches)...
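
The hook named in that output, /hooks/entrypoint-pre.d/20_perms_check.sh, is where the fast/full decision is made. As a rough mental model only (this is a hypothetical sketch, not the actual lancache script; CACHE_UID and CACHE_GID are placeholder names), the logic amounts to:

# Hypothetical sketch of a fast-then-full permissions check; not the real hook script.
full_check=false
if [ "$FORCE_PERMS_CHECK" = "true" ]; then
    full_check=true
else
    # Fast check: compare the cache root's owner against the expected uid:gid.
    if [ "$(stat -c '%u:%g' /data/cache)" != "$CACHE_UID:$CACHE_GID" ]; then
        full_check=true   # mismatch found, escalate to the full recursive check
    fi
fi
if [ "$full_check" = "true" ]; then
    echo "Doing full checking of permissions (This WILL take a long time on large caches)..."
    chown -R "$CACHE_UID:$CACHE_GID" /data/cache
fi

If the hook works anything like this, a fast check that keeps failing (as it can on network storage, where chown has no lasting effect) would trigger the full check on every start regardless of FORCE_PERMS_CHECK.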
VibroAxe commented 8 months ago

Two possible issues here: either the fast permissions check is finding a permission issue, so it's trying to run the full check to rectify it (are you using network storage by any chance?), or you've not recreated the docker containers after changing the value in the env file. Did you run docker compose up -d?
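
For the second possibility: env_file values are only read when a container is created, so editing .env and restarting is not enough. A sketch of the recreate step, assuming the compose file shown above:

docker compose up -d                               # recreates containers whose configuration changed
docker compose up -d --force-recreate monolithic   # or force recreation of just the cache container

Destroying and recreating with docker compose down followed by docker compose up -d achieves the same thing.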

jennec commented 8 months ago

I have an SMB share mounted to a local folder for the cache location.
I’ve destroyed and recreated the containers each time I made a change to the ENV file or to the docker-compose file.

I only enabled this setting because I initially had permissions problems with the mount. Prior to that it was not doing a full permissions check. Once I sorted out the permissions issue, this problem started.
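
Relevant detail for the SMB case: on a CIFS mount, ownership and permissions are fixed by the mount options, not by chown from inside a container, so an ownership-based fast check can fail on every start even after the share itself is working. A hypothetical fstab entry (server, share, credentials path, and uid/gid are placeholders; the ids should match whatever user owns the cache inside the container):

//nas.example.lan/lancache /media/lancache cifs credentials=/root/.smbcreds,uid=33,gid=33,file_mode=0644,dir_mode=0755 0 0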

Does monolithic log anywhere why it decided to do the full permissions check when it starts up?
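
A way to answer that locally while waiting for a reply is to read the hook script and the startup output inside the running container. A sketch, using docker compose ps -q to resolve the container ID:

docker exec $(docker compose ps -q monolithic) cat /hooks/entrypoint-pre.d/20_perms_check.sh   # the actual decision logic
docker logs $(docker compose ps -q monolithic) 2>&1 | head -n 50                               # startup output, including the perms-check lines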