Closed bttd closed 5 months ago
Can you provide your application config (redact login info) and compose file or run config?
Unfortunately, df -h on your host machine is essentially useless. I'm not 100% familiar with mergerfs, but if it doesn't propagate free space into the container correctly, Janitorr won't read it correctly either.
Please execute it from within the container, so we can narrow down the problem.
If you're familiar with Java/Kotlin, you could also reproduce the code used on the host to read free space.
In JConsole this would be akin to
new java.io.File("/dir/here").getFreeSpace() // see if it matches bytes from df
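Expanding that one-liner into a runnable sketch, for anyone who wants to test outside JConsole (the directory path is a placeholder; point it at your mapped mount):

```java
import java.io.File;

public class FreeSpaceCheck {
    public static void main(String[] args) {
        // Placeholder path: pass the mount you want to inspect as the first argument
        String dir = args.length > 0 ? args[0] : "/";
        File f = new File(dir);
        long total = f.getTotalSpace();
        long free = f.getFreeSpace();     // raw free bytes, may include reserved blocks
        long usable = f.getUsableSpace(); // bytes actually available to this JVM/user
        System.out.printf("total=%d free=%d usable=%d free%%=%.2f%n",
                total, free, usable, 100.0 * free / total);
    }
}
```

Compare the byte counts against `df` (without `-h`, so the units match).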
Without investigating further, I'd wager my bet on mergerfs reporting things wrong to your container.
Hi,
I try:
docker exec -it janitorr df -h /mnt/DATA
And I get this output:
Filesystem      Size  Used Avail Use% Mounted on
DATA            3.6T  3.1T  345G  91% /mnt/DATA
I still get this in logs: [ scheduling-1] c.g.s.j.cleanup.AbstractCleanupSchedule : Free disk space: 14.474077584301218%
Then please provide your compose file and application.yml (redact personal info). As you can see, all Janitorr does is create a virtual file for the folder you supply via:

file-system:
  free-space-check-dir: "/"

If free-space-check-dir is not set to /mnt/DATA, that's your problem right there.
Hi!
Docker compose:
janitorr:
  container_name: janitorr
  image: ghcr.io/schaka/janitorr:latest
  ports:
    - "8978:8978"
  volumes:
    - ./janitorr/config:/config
    - /mnt/DATA/:/mnt/DATA
application.yaml
server:
  port: 8978

# File system access (same mapping as Sonarr, Radarr and Jellyfin) is required to delete TV shows by season and create "Leaving Soon" collections in Jellyfin
# Currently, Jellyfin does not support an easy way to add only a few seasons or movies to a collection, so we need access to temporary symlinks
# Additionally, checks to prevent deletion of media that is still seeding require file system access as well
file-system:
  access: true
  validate-seeding: false # validates seeding by checking if the original file exists and skips deletion - turning this off will send a delete to the *arrs even if a torrent may still be active
  leaving-soon-dir: "/mnt/DATA/leaving-soon" # A directory this container can write to and Jellyfin can find under the same path - this will contain new folders with symlinks to files for Jellyfin's "Leaving Soon" collections
  from-scratch: true # Clean up entire "Leaving Soon" directory and rebuild from scratch - this can help with clearing orphaned data - turning this off can save resources (less writes to drive)
  free-space-check-dir: "/mnt/DATA/" # This is the default directory Janitorr uses to check how much space is left on your drives. By default, it checks the entire root - you may point it at a specific folder

application:
  dry-run: true
  leaving-soon: 14d # 14 days before a movie is deleted, it gets added to a "Leaving Soon" type collection (i.e. movies that are 76 to 89 days old)
  exclusion-tag: "keep" # Set this tag to your movies or TV shows in the *arrs to exclude media from being cleaned up

  media-deletion:
    enabled: true
    movie-expiration:
      # Percentage of free disk space to expiration time - if the highest given number is not reached, nothing will be deleted
      # If filesystem access is not given, disk percentage can't be determined. As a result, Janitorr will always choose the largest expiration time.
      5: 180d
      10: 365d
    season-expiration:
      5: 180d
      10: 365d

  tag-based-deletion:
    enabled: false
    minimum-free-disk-percent: 100
    schedules:
      - tag: 5 - demo
        expiration: 30d
      - tag: 10 - demo
        expiration: 7d

clients:
  sonarr:
    enabled: true
    url:
    api-key:
    delete-empty-shows: true # If a show that was "touched" by Janitorr contains no files and has no monitored seasons at all, it will get deleted as part of orphan cleanup
  radarr:
    enabled: true
    url:
    api-key:
  jellyfin:
    enabled: true
    url:
    api-key:
    username:
    password:
    delete: true # Jellyfin setup is required for JellyStat. However, if you don't want Janitorr to send delete requests to the Jellyfin API, disable it here
  jellyseerr:
    enabled: true
    url:
    api-key:
    match-server: false # Enable if you have several Radarr/Sonarr instances set up in Jellyseerr. Janitorr will match them by the host+port supplied in their respective config settings.
  jellystat:
    enabled: true
    url:
    api-key:
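To illustrate how a percent-to-duration map like movie-expiration above could be interpreted: the following is a hypothetical sketch based only on my reading of the config comments, not Janitorr's actual code, and the class and method names are mine:

```java
import java.time.Duration;
import java.util.Optional;
import java.util.TreeMap;

public class ExpirationPicker {
    // Hypothetical re-implementation of the threshold logic described in the
    // config comments: pick the expiration for the smallest free-space
    // threshold the current free percentage falls under; if free space is at
    // or above the highest threshold, nothing is deleted.
    static Optional<Duration> pick(TreeMap<Integer, Duration> schedule, double freePercent) {
        // TreeMap iterates thresholds in ascending order, e.g. {5 -> 180d, 10 -> 365d}
        for (var entry : schedule.entrySet()) {
            if (freePercent < entry.getKey()) {
                return Optional.of(entry.getValue());
            }
        }
        return Optional.empty(); // enough free space: no deletion
    }

    public static void main(String[] args) {
        TreeMap<Integer, Duration> movieExpiration = new TreeMap<>();
        movieExpiration.put(5, Duration.ofDays(180));
        movieExpiration.put(10, Duration.ofDays(365));

        System.out.println(pick(movieExpiration, 3.0));   // below 5%: aggressive 180d cutoff
        System.out.println(pick(movieExpiration, 14.47)); // at/above 10%: empty, nothing deleted
    }
}
```

Under this reading, the 14.47% figure from the logs sits above the highest threshold (10), so no deletions would trigger.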
Assuming that you are loading the application.yml file correctly, it seems to be an actual issue where Java misinterprets free space, or at least reads it differently from df -h, when using mergerfs.
This is low priority and I can't speak to when I'll be able to debug this, as it isn't my use case at all. I'll have to run a VM with mergerfs to reproduce this somehow.
I think it's loaded correctly; if I change the path to /, it shows the correct disk usage of my system disk.
If I can assist you somehow to make it easier to debug, please let me know. (I can run test scripts on the host or inside the container, for example, or I can give you some kind of access to a container mounted with mergerfs.)
If you can at all, run a Docker image where you have access to Java (21) and/or JConsole.
Run the code example given above on a mapped /mnt/DATA path and see if it's correct.
You can also use getUnallocatedSpace() and see if the result is different.
Unfortunately, neither of these is documented very well, so we're going to have to trial and error this.
Alternatively, you can run:
Files.getFileStore(Path.of("/mnt/DATA")).getUnallocatedSpace()
Files.getFileStore(Path.of("/mnt/DATA")).getUsableSpace()
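As a self-contained sketch comparing the FileStore readings side by side (the path here is "/" as a placeholder so it runs anywhere; substitute the mapped mount):

```java
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;

public class StoreSpace {
    public static void main(String[] args) throws Exception {
        // Placeholder path; use "/mnt/DATA" inside the container
        FileStore store = Files.getFileStore(Path.of(args.length > 0 ? args[0] : "/"));
        long total = store.getTotalSpace();
        long unallocated = store.getUnallocatedSpace(); // raw free bytes
        long usable = store.getUsableSpace();           // free bytes minus reserved blocks
        // df's "Avail" column generally corresponds to usable, not unallocated
        System.out.printf("total=%d unallocated=%d usable=%d%n", total, unallocated, usable);
        System.out.printf("unallocated%%=%.2f usable%%=%.2f%n",
                100.0 * unallocated / total, 100.0 * usable / total);
    }
}
```

If the two percentages diverge, that difference would explain a mismatch between Janitorr's log line and df's Use% column.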
Edit: I just found this: https://stackoverflow.com/questions/68499043/jdk11-getfreespace-and-gettotalspace-from-file-is-not-matching-df
It refers to how df doesn't match freeSpace. I'm going to use availableSpace on the develop branch, so please test this image and see if that fixes the issue for you.
Edit2: df -h is simply misleading. In your example, 3.1T/3.6T is 86% in use, meaning 14% free.
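To make that arithmetic concrete, here is a quick sketch using the rounded figures from the df output above (values are approximate since df rounds with -h):

```java
public class DfMath {
    public static void main(String[] args) {
        double totalT = 3.6;   // df "Size", terabytes
        double usedT = 3.1;    // df "Used", terabytes
        double availG = 345;   // df "Avail", gigabytes

        // What Java's getFreeSpace effectively sees: total minus used
        double freeByDifference = 100.0 * (totalT - usedT) / totalT;
        // What df's Avail column (and its Use%) implies: avail over total
        double freeByAvail = 100.0 * (availG / 1024.0) / totalT;

        System.out.printf("free (total-used): %.1f%%%n", freeByDifference); // ~13.9
        System.out.printf("free (avail):      %.1f%%%n", freeByAvail);      // ~9.4
    }
}
```

The first figure matches Janitorr's logged 14.47% (within rounding), and the second matches the "around 9% free" estimate, so both readings are internally consistent; they just measure different things.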
Hi,
I have a mergerfs volume mounted at /mnt/DATA on my host computer, it's mounted into the janitorr container at /mnt/DATA, and I set the disk space check path to /mnt/DATA.
I have around 9% free space on that volume, but janitorr shows this in the logs: 14.474077584301218%
This is my read out from the host ssh:
df -h /mnt/DATA
DATA 3,6T 3,1T 345G 91% /mnt/DATA
I don't have any volume with 14.474077584301218% free space according to df -h on the host.