mdhiggins / sonarr-sma

Sonarr docker based on linuxserver/sonarr with SMA built in using python3
MIT License

Permission issue? or something else? #57

Closed BastionNtB closed 4 months ago

BastionNtB commented 4 months ago

Describe the bug: When running postSonarr.sh from Docker on an import, I get this in the logs.

2024-02-27 17:38:30 - SonarrPostProcess - INFO - Sonarr extra script post processing started.
2024-02-27 17:38:30 - resources.readsettings - INFO - /usr/local/sma/venv/bin/python3
2024-02-27 17:38:30 - resources.readsettings - DEBUG - Loading default config file.
2024-02-27 17:38:30 - resources.readsettings - INFO - Loading config file /usr/local/sma/config/autoProcess.ini.
2024-02-27 17:38:30 - resources.readsettings - WARNING - Force-convert is true, so process-same-extensions is being overridden to true as well
2024-02-27 17:38:30 - SonarrPostProcess - DEBUG - environ({'SMA_UPDATE': 'false', 'NVIDIA_VISIBLE_DEVICES': 'all', 'sonarr_episodefile_sourcepath': '/nfs/shareserver/!Downloads/!sonarr/The.New.Look.S01E03.Nothing.But.Blue.Skies.1080p.ATVP.WEB-DL.DDP5.1.Atmos.H.264-FLUX/The.New.Look.S01E03.Nothing.But.Blue.Skies.1080p.ATVP.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv', 'sonarr_isupgrade': 'True', 'PUID': '3001', 'sonarr_episodefile_id': '39618', 'HOSTNAME': 'sonarr', 'S6_CMD_WAIT_FOR_SERVICES_MAXTIME': '0', 'LANGUAGE': 'en_US.UTF-8', 'sonarr_deletedpaths': '/nfs/shareserver/TV/The New Look (2024) [imdbid-tt18177528]/Season 01/The New Look (2024) - S01E03 - Nothing But Blue Skies [WEBDL-1080p][EAC3 Atmos 5.1][h264]-FLUX.mkv', 'sonarr_episodefile_relativepath': 'Season 01/The New Look (2024) - S01E03 - Nothing But Blue Skies [WEBDL-1080p][EAC3 Atmos 5.1][h264]-FLUX.mkv', 'sonarr_series_id': '581', 'sonarr_series_tvdbid': '416182', 'XDG_DATA_HOME': '/config/.config/share', 'XDG_CONFIG_HOME': '/config/.config', 'No_PreLoadSQLite': 'true', 'sonarr_episodefile_episodeids': '31599', 'sonarr_download_client': '', 'sonarr_episodefile_path': '/nfs/shareserver/TV/The New Look (2024) [imdbid-tt18177528]/Season 01/The New Look (2024) - S01E03 - Nothing But Blue Skies [WEBDL-1080p][EAC3 Atmos 5.1][h264]-FLUX.mkv', 'UMASK': '0002', 'No_SQLiteFunctions': 'true', 'sonarr_episodefile_sourcefolder': '/nfs/shareserver/!Downloads/!sonarr/The.New.Look.S01E03.Nothing.But.Blue.Skies.1080p.ATVP.WEB-DL.DDP5.1.Atmos.H.264-FLUX', 'PWD': '/app/sonarr/bin', 'NVIDIA_DRIVER_CAPABILITIES': 'all', 'sonarr_episodefile_qualityversion': '1', 'SMA_HWACCEL': 'true', 'sonarr_episodefile_quality': 'WEBDL-1080p', 'HOME': '/root', 'LANG': 'en_US.UTF-8', 'sonarr_episodefile_seasonnumber': '1', 'sonarr_series_path': '/nfs/shareserver/TV/The New Look (2024) [imdbid-tt18177528]', 'sonarr_episodefile_episodeairdatesutc': '2/14/2024 6:46:00 AM', 'PGID': '1337', 'VIRTUAL_ENV': '/lsiopy', 'sonarr_episodefile_episodecount': 
'1', 'sonarr_series_title': 'The New Look', 'sonarr_deletedrelativepaths': 'Season 01/The New Look (2024) - S01E03 - Nothing But Blue Skies [WEBDL-1080p][EAC3 Atmos 5.1][h264]-FLUX.mkv', 'sonarr_download_client_type': '', 'S6_VERBOSITY': '1', 'S6_STAGE2_HOOK': '/docker-mods', 'sonarr_series_tvmazeid': '60420', 'sonarr_episodefile_episodetitles': 'Nothing But Blue Skies', 'sonarr_episodefile_episodenumbers': '3', 'TERM': 'xterm', 'SMA_FFPROBE_PATH': 'ffprobe', 'sonarr_eventtype': 'Download', 'sonarr_episodefile_episodeairdates': '2024-02-14', 'SHLVL': '1', 'SMA_PATH': '/usr/local/sma', 'sonarr_series_imdbid': 'tt18177528', 'LD_LIBRARY_PATH': '/usr/local/cuda-11.4/lib64', 'SMA_RS': 'Sonarr', 'LSIO_FIRST_PARTY': 'true', 'sonarr_series_type': 'Standard', 'PATH': '/command:/lsiopy/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'SMA_FFMPEG_PATH': 'ffmpeg', 'No_SQLiteXmlConfigFile': 'true', 'SONARR_BRANCH': 'main', 'sonarr_download_id': '', 'No_Expand': 'true', 'sonarr_episodefile_scenename': 'The.New.Look.S01E03.Nothing.But.Blue.Skies.1080p.ATVP.WEB-DL.DDP5.1.Atmos.H.264-FLUX', 'sonarr_episodefile_releasegroup': 'FLUX', '_': '/usr/local/sma/venv/bin/python3'})
2024-02-27 17:38:30 - SonarrPostProcess - DEBUG - Input file: /nfs/shareserver/TV/The New Look (2024) [imdbid-tt18177528]/Season 01/The New Look (2024) - S01E03 - Nothing But Blue Skies [WEBDL-1080p][EAC3 Atmos 5.1][h264]-FLUX.mkv.
2024-02-27 17:38:30 - SonarrPostProcess - DEBUG - Original name: The.New.Look.S01E03.Nothing.But.Blue.Skies.1080p.ATVP.WEB-DL.DDP5.1.Atmos.H.264-FLUX.
2024-02-27 17:38:30 - SonarrPostProcess - DEBUG - TVDB ID: 416182.
2024-02-27 17:38:30 - SonarrPostProcess - DEBUG - Season: 1 episode: 3.
2024-02-27 17:38:30 - SonarrPostProcess - DEBUG - Sonarr series ID: 581.
2024-02-27 17:38:30 - resources.mediaprocessor - DEBUG - Invalid source, no data returned.
2024-02-27 17:38:30 - resources.mediaprocessor - INFO - File /nfs/shareserver/TV/The New Look (2024) [imdbid-tt18177528]/Season 01/The New Look (2024) - S01E03 - Nothing But Blue Skies [WEBDL-1080p][EAC3 Atmos 5.1][h264]-FLUX.mkv is not valid
2024-02-27 17:38:30 - SonarrPostProcess - ERROR - Processing returned False.
2024-02-27 17:38:30 - SonarrPostProcess - ERROR - Error processing file.
Traceback (most recent call last):
  File "/usr/local/sma/postSonarr.py", line 230, in <module>
    sys.exit(1)
SystemExit: 1

Additional context: I've recently moved from an Ubuntu-hosted Sonarr to a Docker-hosted Sonarr using your sonarr-sma image, and got it working with NVENC and all that. (Thanks for the help with that, btw!)

This is the same NFS share I used in the past, with the same user ID and group ID for Sonarr as before. On the NFS share itself (TrueNAS), I've mapped all users to root and all groups to my shared group ID for all networked services. This usually lets any service act on and modify files as root without issue; however, I wonder if it's causing the issue shown above?

Other issues point to a permission issue, but looking at the permissions from the container, it shouldn't really be the cause... Here is an ls command from the sonarr container.

-rwxrwxrwx 1 root abc 4011613454 Feb 27 17:04 'The New Look (2024) - S01E03 - Nothing But Blue Skies [WEBDL-1080p][EAC3 Atmos 5.1][h264]-FLUX.mkv'

root is the user all NFS actions are mapped to on the share side, and abc is the group I've set Sonarr to run with in the Docker container. As it shows, it's extremely permissive, 777 across the board for all files, yet it's reporting what has previously been reported as a permission issue. Is that the case here as well, do you think? What's confusing to me is that it's able to copy the file there, but then it says nothing can be done with it.

Any idea?

mdhiggins commented 4 months ago

Probably not a permission issue. Are your FFmpeg binaries actually working? It might be worth sharing info about your Docker config and manually making sure ffprobe is functioning.
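
For instance, something like this could do a quick check (a sketch: the container name `sonarr` and the helper name `check_ffmpeg` are assumptions, and SMA may be pointed at different binaries via SMA_FFMPEG_PATH/SMA_FFPROBE_PATH):

```shell
# check_ffmpeg: sanity-check the ffprobe/ffmpeg binaries inside a running
# container. Container name defaults to "sonarr"; adjust to your service.
check_ffmpeg() {
    local container="${1:-sonarr}"
    docker exec "$container" ffprobe -version >/dev/null 2>&1 \
        || { echo "ffprobe failed in $container" >&2; return 1; }
    # list the hardware accelerators this build actually supports
    docker exec "$container" ffmpeg -hide_banner -hwaccels
}
```

If ffprobe can't even print its version, SMA's "Invalid source, no data returned" is the typical symptom.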

BastionNtB commented 4 months ago

Ok, so this is strange. Apologies... It seems that between the time I tested the setup with a manual.py run and the time I actually tried a manual import in Sonarr, the Docker image must have been updated by Watchtower or SOMETHING, but it is actually working now. I had to redo the libssl gist steps from here: https://gist.github.com/joulgs/c8a85bb462f48ffc2044dd878ecaa786

Is it possible to get these steps incorporated into the build process somehow? I'm a total noob at Docker, so I don't know if I can simply add it to the build context or if it requires a change in the Dockerfile.

mdhiggins commented 4 months ago

If you put together a little summary of all the steps required to get nvidia working, I can try and make a little script

BastionNtB commented 4 months ago

This is required for the build context in the compose:

    build:
      context: https://github.com/mdhiggins/sonarr-sma.git#build
      args:
        - ffmpeg_tag=5.1-nvidia2004
        - sonarr_tag=3.0.10

3.0.10 is the last Ubuntu-based release of Sonarr v3.

The NVIDIA ffmpeg tag will change over time, but you can use whatever version, as long as an nvidia2004 variant is used.

In the container, this command needs to be run:

wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.0g-2ubuntu4_amd64.deb && dpkg -i libssl1.1_1.1.0g-2ubuntu4_amd64.deb

or, as a one-liner that cleans up after itself (dpkg can't install a .deb from stdin, so it has to touch disk first):

wget -qO /tmp/libssl1.1.deb http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.0g-2ubuntu4_amd64.deb && dpkg -i /tmp/libssl1.1.deb && rm /tmp/libssl1.1.deb

The environment variable needs to be passed as well:

    environment:
      - LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64

And then reservations need to be made for Docker to allow access to the GPUs. If you do it with reservations like this, you don't need to change the runtime or set the NVIDIA environment variables.

    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: ['compute','graphics','utility','video']

You can do it through another method if preferred, though I prefer the reservations option above as it's cleaner.

    runtime: nvidia # Expose NVIDIA GPUs
    environment:
      - NVIDIA_DRIVER_CAPABILITIES=compute,graphics,utility,video
      - NVIDIA_VISIBLE_DEVICES=all
    devices:
      - /dev/dri:/dev/dri # VAAPI/NVDEC/NVENC render nodes

Also, the script install location isn't given the proper UID or GID when it's set up.

root@sonarr:/# ls -l /usr/local/sma/
total 172
drwxr-xr-x 2 root root  4096 Feb 26 21:23 autoprocess
drwxrwxr-x 2 abc  abc   4096 Feb 26 22:17 config
drwxr-xr-x 2 root root  4096 Feb 26 21:23 converter
-rwxr-xr-x 1 root root  8089 Feb 26 21:23 delugePostProcess.py
-rw-r--r-- 1 root root     0 Feb 26 21:23 __init__.py
-rw-r--r-- 1 root root  1069 Feb 26 21:23 license.md
-rwxr-xr-x 1 root root 23096 Feb 26 21:23 manual.py
-rwxr-xr-x 1 root root  9535 Feb 26 21:23 NZBGetPostProcess.py
drwxr-xr-x 2 root root  4096 Feb 26 21:23 post_process
-rwxr-xr-x 1 root root 12443 Feb 26 21:23 postRadarr.py
-rwxr-xr-x 1 root root  2934 Feb 26 21:23 postSickbeard.py
-rwxr-xr-x 1 root root 13141 Feb 26 21:23 postSonarr.py
-rw-r--r-- 1 root root    72 Feb  5 18:32 postSonarr.sh
-rwxr-xr-x 1 root root  9382 Feb 26 21:23 qBittorrentPostProcess.py
-rw-r--r-- 1 root root 18338 Feb 26 21:23 README.md
drwxr-xr-x 2 root root  4096 Feb 26 21:23 resources
-rwxr-xr-x 1 root root  4204 Feb 26 21:23 SABPostProcess.py
drwxr-xr-x 3 root root  4096 Feb 26 21:23 setup
-rw-r--r-- 1 root root  2002 Feb  5 18:32 update.py
-rwxr-xr-x 1 root root 10534 Feb 26 21:23 uTorrentPostProcess.py
drwxr-xr-x 5 root root  4096 Feb 26 21:23 venv

This causes Sonarr to be unable to execute the scripts needed to encode files inside the container.

We can fix this by changing the permissions, but I feel it should be included as part of the setup, because if the container is ever reset, these steps have to be repeated each time. (Note: abc:abc is the lsio user and group name that the process runs as.)

chown -R abc:abc /usr/local/sma/ && chmod -R 775 /usr/local/sma/

As for the autoProcess.ini, I don't have a clue what's strictly required... and there's a lot of confusion to be had. When I run ffmpeg -hwaccels I only get cuda, and when I run ffmpeg -decoders I only see decoders listed with cuvid. But someone in another issue said to use hwaccel-output-format = cuda:cuvid and to throw in all the other accelerators; I ended up not doing that, and just added 'cuda:#' entries for the hwdevices (hwdevices = cuda:0, cuda:1). I also added cuvid to the accelerator list because I'm not sure whether the output-format setting interacts with it or not... it's all confusing to me.

[Converter]
ffmpeg = ffmpeg
ffprobe = ffprobe
threads = 0
hwaccels = cuda, cuvid
hwaccel-decoders = h264_cuvid, hevc_cuvid, av1_cuvid, vc1_cuvid, vp8_cuvid, vp9_cuvid, mjpeg_cuvid, mpeg1_cuvid, mpeg2_cuvid, mpeg4_cuvid, h264_cuda, hevc_cuda, av1_cuda, vc1_cuda, vp8_cuda, vp9_cuda, mjpeg_cuda, mpeg1_cuda, mpeg2_cuda, mpeg4_cuda
hwdevices = cuda:0, cuda:1
hwaccel-output-format = cuda:cuvid
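
Rather than guessing, the build can be asked what it supports and the lists written from that (a sketch; assumes `ffmpeg` is on PATH inside the container, and the helper name is made up):

```shell
# list_cuda_support: print the hwaccels and CUDA/CUVID decoders this ffmpeg
# build was compiled with, to base the autoProcess.ini lists on real output.
list_cuda_support() {
    echo "hwaccels:"
    ffmpeg -hide_banner -hwaccels
    echo "cuda decoders:"
    ffmpeg -hide_banner -decoders 2>/dev/null | grep -E 'cuvid|cuda' || true
}
```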

mdhiggins commented 4 months ago

Alright, so for libssl, I should be able to add something like this to the startup script:

    if [[ -n "${SMA_LIBSSL}"]]; then
        filename=$(basename "${SMA_LIBSSL}")
        wget -P /tmp "${SMA_LIBSSL}"
        dpkg -i "/tmp/${filename}"
        rm "/tmp/${filename}"
    fi

And then you would set SMA_LIBSSL to whichever libssl URL you want to use

The deploy variables will need to be set at the docker/docker-compose level, so that's nothing we need to bake in

Similarly, the environment variables probably need to be set at the user level too; basically these 3 extras for nvidia specifically:

    environment:
      - LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64
      - NVIDIA_DRIVER_CAPABILITIES=compute,graphics,utility,video
      - NVIDIA_VISIBLE_DEVICES=all

What I could do to simplify this is also include this in the startup script

if [ -f /usr/bin/apt ]; then
    ## Ubuntu
    ...
    if [[ "${SMA_NVIDIA}" == "true" ]]; then
        export NVIDIA_DRIVER_CAPABILITIES="compute,graphics,utility,video"
        export NVIDIA_VISIBLE_DEVICES="all"
        if [[ -z "${LD_LIBRARY_PATH}" ]]; then
            export LD_LIBRARY_PATH="/usr/local/cuda-11.4/lib64"
        fi
        if [[ -z "${SMA_LIBSSL}" ]]; then
            export SMA_LIBSSL="http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.0g-2ubuntu4_amd64.deb"
        fi
    fi
    if [[ -n "${SMA_LIBSSL}"]]; then
        filename=$(basename "${SMA_LIBSSL}")
        wget -P /tmp "${SMA_LIBSSL}"
        dpkg -i "/tmp/${filename}"
        rm "/tmp/${filename}"
    fi
    ...
fi

This will include everything and provide a default value for SMA_LIBSSL while still allowing that value to be overridden, and the same for LD_LIBRARY_PATH
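
In compose that would look something like this (SMA_NVIDIA and SMA_LIBSSL as sketched in the snippet above; the override line is optional since a default is provided):

```yaml
    environment:
      - SMA_NVIDIA=true
      # optional override; defaults to the 1.1.0g build above
      - SMA_LIBSSL=http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.0g-2ubuntu4_amd64.deb
```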

Now, one thing I wanted to check: the permissions issue you bring up should not be happening, because the last few lines of the startup script have long been set to

# permissions
chown -R abc:abc ${SMA_PATH}
chmod -R 775 ${SMA_PATH}/*.sh

My concern is that maybe the startup script isn't running at all? I wonder if the build tag you're using is so old that it doesn't use s6v3 and it's not getting run due to the old directory structure. Not sure if you can pop into your container and check to see

mdhiggins commented 4 months ago

I did some digging myself and it looks like that's probably the case: the s6v3 update happened with the migration to Alpine, so the startup scripts are probably not being executed. I'll probably have to make a separate build tag with the s6v2 directory structure to be used when running in conjunction with the old Sonarr Ubuntu builds

mdhiggins commented 4 months ago

Alright, I put up a new build tag, build-s6v2, which includes the updates to the startup script; give it a try

BastionNtB commented 4 months ago

That sounds awesome! Thank you for all the work you put into this!

So, forgive my ignorance. How do I use the new build tag with the build context?

mdhiggins commented 4 months ago

Just modify the context line

    build:
      context: https://github.com/mdhiggins/sonarr-sma.git#build-s6v2
      args:
        - ffmpeg_tag=5.1-nvidia2004
        - sonarr_tag=3.0.10

BastionNtB commented 4 months ago

Will give it a try; it's encoding a backlog at the moment. Btw, are my settings set correctly here? I have 2 GPUs, but it's only using the second one, and I kinda thought maybe it would use both! Not sure if ffmpeg can do that, though.

hwdevices = cuda:0, cuda:1

mdhiggins commented 4 months ago

Pretty sure it can only use one at a time

mdhiggins commented 4 months ago

Will give it a try, it's encoding a backlog at the moment. Btw, are my settings set correctly here? I have 2 GPUs, but it's only using the second one, and I kinda thought maybe it would use both! But not sure if ffmpeg can do that though.


hwdevices = cuda:0, cuda:1

This just gets resolved to a Python dictionary, so a duplicate key simply overwrites the earlier one. You'll only end up with {cuda: 1}
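
That collapse is easy to see with a quick sketch (a bash associative array standing in for the Python dict; this is not SMA's actual parser):

```shell
# Parse "cuda:0, cuda:1" into a map: duplicate device names overwrite
# earlier entries, so only the last index survives.
declare -A devices
for entry in cuda:0 cuda:1; do
    devices["${entry%%:*}"]="${entry##*:}"
done
echo "cuda -> ${devices[cuda]}"
```

Only one cuda entry remains, pointing at index 1; to pin the first GPU, list just cuda:0.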

BastionNtB commented 4 months ago

So, I gave it a try today: took down the stack, removed the existing image, then re-pulled and built the new one using the build-s6v2 tag... It doesn't seem to be doing anything different.

ffmpeg doesn't work without installing the libssl1.1 package, and /usr/local/sma is still owned by root:root

I wanted to verify that the scripts were updated, but when I went to /etc/s6-overlay/s6-rc.d/ there wasn't any sma folder in there. Not sure if that's normal; Docker is new to me, and this s6 overlay thing is alien.

root@sonarr:/etc/s6-overlay/s6-rc.d# ls -l
total 76
drwxr-xr-x 3 root root 4096 Dec 19 23:25 ci-service-check
drwxr-xr-x 1 root root 4096 Dec 19 23:25 init-adduser
drwxr-xr-x 3 root root 4096 Dec 19 23:25 init-config
drwxr-xr-x 1 root root 4096 Dec 22 23:31 init-config-end
drwxr-xr-x 3 root root 4096 Dec 19 23:25 init-crontab-config
drwxr-xr-x 3 root root 4096 Dec 19 23:25 init-custom-files
drwxr-xr-x 2 root root 4096 Dec 19 23:25 init-envfile
drwxr-xr-x 2 root root 4096 Dec 19 23:25 init-migrations
drwxr-xr-x 3 root root 4096 Dec 19 23:25 init-mods
drwxr-xr-x 3 root root 4096 Dec 19 23:25 init-mods-end
drwxr-xr-x 1 root root 4096 Dec 19 23:25 init-mods-package-install
drwxr-xr-x 3 root root 4096 Dec 19 23:25 init-os-end
drwxr-xr-x 3 root root 4096 Dec 19 23:25 init-services
drwxr-xr-x 3 root root 4096 Dec 22 23:31 init-sonarr-config
drwxr-xr-x 3 root root 4096 Dec 19 23:25 svc-cron
drwxr-xr-x 4 root root 4096 Dec 22 23:31 svc-sonarr
drwxr-xr-x 1 root root 4096 Dec 22 23:31 user
drwxr-xr-x 1 root root 4096 Dec 19 23:25 user2
mdhiggins commented 4 months ago

For s6v2, the startup script is going to be located at /etc/cont-init.d/90-sma-mod

If you look at the Docker logs when the container starts, you should see entries indicating it ran. It looks like I had a syntax error I didn't catch, though; I just fixed that, so try a fresh build and then check the container logs to see whether the startup script executed. If it didn't, libssl definitely won't be there.

You should see

[90-sma-config] SMA config initialized

plus maybe a few more entries depending on what it's doing

Try again with the fixes I just pushed

BastionNtB commented 4 months ago

Ugh, docker logs. Can't believe I forgot about those lol!

Here you go!

[migrations] started
[migrations] no migrations found
/etc/cont-init.d/90-sma-config: line 3: echoprefix: command not found
 SMA config initialized
/etc/cont-init.d/90-sma-config: line 46: syntax error in conditional expression: unexpected token `;'
/etc/cont-init.d/90-sma-config: line 46: syntax error near `;'
/etc/cont-init.d/90-sma-config: line 46: `    if [[ -n "${SMA_LIBSSL}"]]; then'
mdhiggins commented 4 months ago

You're using an old version, that's been fixed

BastionNtB commented 4 months ago

Ok, weird. I redeleted the image and did another pull and it's working now... Sorry about that!

Thank you for all the help! :)

mdhiggins commented 4 months ago

New variables and everything worked ok? I'm going to update the readme if it's good to go and migrate the same changes to the other containers

BastionNtB commented 4 months ago

Hey, it looks like it all worked. Just don't forget to mention that if you use the environment variables you'll also need to include the runtime portion. Other than that, I think this will work. Did lsio also do the same for Radarr? I haven't even tried it yet.

mdhiggins commented 4 months ago

I think Radarr has been on alpine for over a year. I'll put together an updated readme next time I have off and link it back here for review.

mdhiggins commented 4 months ago

Merged these changes over to https://github.com/mdhiggins/radarr-sma and https://github.com/mdhiggins/sma-mod

Also added a build-s6v2 tag for radarr-sma

Any chance you'd want to write up a wiki-type page for the configuration you used, so I can add it for other people looking to do NVIDIA hwaccel?

BastionNtB commented 4 months ago

I might be able to help you with that, how do I contribute to it?

mdhiggins commented 4 months ago

You could just post it here and I'll format it and move it over