linuxserver / docker-radarr

GNU General Public License v3.0

4.1.0.6175 - No UI available on port, no error messages being recorded #180

Closed acurrington closed 1 year ago

acurrington commented 2 years ago



Expected Behavior

Should be able to access the web UI

Current Behavior

Get a message saying "This site can't be reached"

Steps to Reproduce

After upgrading to 4.1.0.6175:

1. Start the Docker container for Radarr.
2. Wait for the log to show that the Radarr container has started.
3. Try to access port 7878.

Nothing appears to be listening, even though the image still appears to be running OK. Nothing Docker or IP related was changed other than pulling the new image. Before this version, the UI showed without an issue.
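A quick way to confirm nothing is listening (a sketch; it assumes the container is named radarr and that the image's busybox provides netstat):

# From the host: is the port mapped, and does anything answer?
docker port radarr
curl -v http://localhost:7878

# Is the Radarr process actually running inside the container?
docker top radarr

# Is anything bound to 7878 inside the container?
docker exec radarr netstat -tln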

Environment

"Latest" Image was pulled from Linuxserver.io

Command used to create docker container (run/create/compose/screenshot)

Pulled within Portainer CE

Docker logs

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service 00-legacy: starting
s6-rc: info: service 00-legacy successfully started
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/01-envfile
cont-init: info: /etc/cont-init.d/01-envfile exited 0
cont-init: info: running /etc/cont-init.d/01-migrations
[migrations] started
[migrations] no migrations found
cont-init: info: /etc/cont-init.d/01-migrations exited 0
cont-init: info: running /etc/cont-init.d/02-tamper-check
cont-init: info: /etc/cont-init.d/02-tamper-check exited 0
cont-init: info: running /etc/cont-init.d/10-adduser


      _         ()
     | |  ___   _    __
     | | / __| | |  /  \
     | | \__ \ | | | () |
     |_| |___/ |_|  \__/

Brought to you by linuxserver.io

To support the app dev(s) visit: Radarr: https://opencollective.com/radarr

To support LSIO projects visit: https://www.linuxserver.io/donate/

GID/UID

User uid:    1000
User gid:    1000

cont-init: info: /etc/cont-init.d/10-adduser exited 0
cont-init: info: running /etc/cont-init.d/30-config
cont-init: info: /etc/cont-init.d/30-config exited 0
cont-init: info: running /etc/cont-init.d/90-custom-folders
cont-init: info: /etc/cont-init.d/90-custom-folders exited 0
cont-init: info: running /etc/cont-init.d/99-custom-files
[custom-init] no custom files found exiting...
cont-init: info: /etc/cont-init.d/99-custom-files exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun radarr (no readiness notification)
s6-rc: info: service legacy-services successfully started
s6-rc: info: service 99-ci-service-check: starting
[ls.io-init] done.
s6-rc: info: service 99-ci-service-check successfully started
[Info] Bootstrap: Starting Radarr - /app/radarr/bin/Radarr - Version 4.1.0.6175
[Debug] Bootstrap: Console selected
[Info] AppFolderInfo: Data directory is being overridden to [/config]
[Debug] Microsoft.Extensions.Hosting.Internal.Host: Hosting starting
[Info] AppFolderInfo: Data directory is being overridden to [/config]
[Info] MigrationController: Migrating data source=/config/radarr.db;cache size=-20000;datetimekind=Utc;journal mode=Wal;pooling=True;version=3
[Info] MigrationController: Migrating data source=/config/logs.db;cache size=-20000;datetimekind=Utc;journal mode=Wal;pooling=True;version=3

[Info] CommandExecutor: Starting 2 threads for tasks.
[Info] Microsoft.Hosting.Lifetime: Application started. Press Ctrl+C to shut down.
[Info] Microsoft.Hosting.Lifetime: Hosting environment: Production
[Info] Microsoft.Hosting.Lifetime: Content root path: /app/radarr/bin

github-actions[bot] commented 2 years ago

Thanks for opening your first issue here! Be sure to follow the bug or feature issue templates!

Roxedus commented 2 years ago

Post the configuration you set in Portainer

acurrington commented 2 years ago
services:
  radarr:
    image: linuxserver/radarr:latest
    container_name: radarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Pacific/Auckland
    volumes:
      - /path/to/my/data:/config
      - /path/to/my/movies:/movies
    ports:
      - 7878:7878
    restart: unless-stopped
acurrington commented 2 years ago

After removing the Docker container, renaming the persistent config directory, and re-deploying the container, I now see the following:

[Info] FluentMigrator.Runner.MigrationRunner: 192: add_on_delete_to_notifications migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.0587345s
[Info] FluentMigrator.Runner.MigrationRunner: 194: add_bypass_to_delay_profile migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] add_bypass_to_delay_profile: Starting migration of Log DB to 194
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (194, '2022-07-12T09:29:02', 'add_bypass_to_delay_profile')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 194: add_bypass_to_delay_profile migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.112132s
[Info] FluentMigrator.Runner.MigrationRunner: 195: update_notifiarr migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] update_notifiarr: Starting migration of Log DB to 195
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (195, '2022-07-12T09:29:02', 'update_notifiarr')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 195: update_notifiarr migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.0209055s
[Info] FluentMigrator.Runner.MigrationRunner: 196: legacy_mediainfo_hdr migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] legacy_mediainfo_hdr: Starting migration of Log DB to 196
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (196, '2022-07-12T09:29:02', 'legacy_mediainfo_hdr')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 196: legacy_mediainfo_hdr migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.0309748s
[Info] FluentMigrator.Runner.MigrationRunner: 197: rename_blacklist_to_blocklist migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] rename_blacklist_to_blocklist: Starting migration of Log DB to 197
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (197, '2022-07-12T09:29:02', 'rename_blacklist_to_blocklist')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 197: rename_blacklist_to_blocklist migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.0670738s
[Info] FluentMigrator.Runner.MigrationRunner: 198: add_indexer_tags migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] add_indexer_tags: Starting migration of Log DB to 198
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (198, '2022-07-12T09:29:02', 'add_indexer_tags')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 198: add_indexer_tags migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.0657246s
[Info] FluentMigrator.Runner.MigrationRunner: 199: mediainfo_to_ffmpeg migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] mediainfo_to_ffmpeg: Starting migration of Log DB to 199
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (199, '2022-07-12T09:29:02', 'mediainfo_to_ffmpeg')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 199: mediainfo_to_ffmpeg migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.0657679s
[Info] FluentMigrator.Runner.MigrationRunner: 200: cdh_per_downloadclient migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] cdh_per_downloadclient: Starting migration of Log DB to 200
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (200, '2022-07-12T09:29:02', 'cdh_per_downloadclient')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 200: cdh_per_downloadclient migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.1101048s
[Info] FluentMigrator.Runner.MigrationRunner: 201: migrate_discord_from_slack migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] migrate_discord_from_slack: Starting migration of Log DB to 201
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (201, '2022-07-12T09:29:02', 'migrate_discord_from_slack')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 201: migrate_discord_from_slack migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.0269028s
[Info] FluentMigrator.Runner.MigrationRunner: 202: remove_predb migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] remove_predb: Starting migration of Log DB to 202
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (202, '2022-07-12T09:29:02', 'remove_predb')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 202: remove_predb migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.0591173s
[Info] FluentMigrator.Runner.MigrationRunner: 203: add_on_update_to_notifications migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] add_on_update_to_notifications: Starting migration of Log DB to 203
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (203, '2022-07-12T09:29:02', 'add_on_update_to_notifications')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 203: add_on_update_to_notifications migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.0287404s
[Info] FluentMigrator.Runner.MigrationRunner: 204: ensure_identity_on_id_columns migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] ensure_identity_on_id_columns: Starting migration of Log DB to 204
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (204, '2022-07-12T09:29:02', 'ensure_identity_on_id_columns')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 204: ensure_identity_on_id_columns migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.0324916s
[Info] FluentMigrator.Runner.MigrationRunner: 205: download_client_per_indexer migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] download_client_per_indexer: Starting migration of Log DB to 205
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (205, '2022-07-12T09:29:02', 'download_client_per_indexer')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 205: download_client_per_indexer migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.0658128s
[Info] FluentMigrator.Runner.MigrationRunner: 206: multiple_ratings_support migrating
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Beginning Transaction
[Info] multiple_ratings_support: Starting migration of Log DB to 206
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: INSERT INTO "VersionInfo" ("Version", "AppliedOn", "Description") VALUES (206, '2022-07-12T09:29:03', 'multiple_ratings_support')
[Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Committing Transaction
[Info] FluentMigrator.Runner.MigrationRunner: 206: multiple_ratings_support migrated
[Info] FluentMigrator.Runner.MigrationRunner: => 0.0665604s

[Info] UpdaterConfigProvider: Update mechanism BuiltIn not supported in the current configuration, changing to Docker.
[Info] ProfileService: Setting up default quality profiles
[Info] CommandExecutor: Starting 2 threads for tasks.
[Info] Microsoft.Hosting.Lifetime: Application started. Press Ctrl+C to shut down.
[Info] Microsoft.Hosting.Lifetime: Hosting environment: Production
[Info] Microsoft.Hosting.Lifetime: Content root path: /app/radarr/bin

However, I still can't connect to port 7878.
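Since the log says "Application started" while the port stays dead, one thing worth checking is what Radarr thinks it should bind to (a sketch; /path/to/my/data is the config mount from the compose above):

# Radarr's bind address and port live in config.xml under the config mount
grep -E 'BindAddress|Port' /path/to/my/data/config.xml

# Expected defaults are roughly:
#   <BindAddress>*</BindAddress>
#   <Port>7878</Port>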

github-actions[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

myna-me commented 1 year ago

Similar issue here. There are no error messages, but no UI is available. I tried the same Docker commands. Not sure if this is the same as OP, but this occurs on a Synology box. I tried it on another box, and latest (version 4.1.0) works fine there; on this box it does not. But if I run version 4.0.5, it runs with no problems. A couple of other things I tried:

In all instances, the /config directory correctly gets its owner/group set up, but two directories get created inside it, custom-cont-init.d and custom-services.d, both owned by root. The container stays running, but I get no GUI, just a "The connection was reset" message. Other than that, here is the log:

[custom-init] no custom services found, skipping...
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service 00-legacy: starting
s6-rc: info: service 00-legacy successfully started
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/01-envfile
cont-init: info: /etc/cont-init.d/01-envfile exited 0
cont-init: info: running /etc/cont-init.d/01-migrations
[migrations] started
[migrations] no migrations found
cont-init: info: /etc/cont-init.d/01-migrations exited 0
cont-init: info: running /etc/cont-init.d/02-tamper-check
cont-init: info: /etc/cont-init.d/02-tamper-check exited 0
cont-init: info: running /etc/cont-init.d/10-adduser

-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/

Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Radarr: https://opencollective.com/radarr

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid:    1029
User gid:    65540
-------------------------------------

cont-init: info: /etc/cont-init.d/10-adduser exited 0
cont-init: info: running /etc/cont-init.d/30-config
cont-init: info: /etc/cont-init.d/30-config exited 0
cont-init: info: running /etc/cont-init.d/90-custom-folders
cont-init: info: /etc/cont-init.d/90-custom-folders exited 0
cont-init: info: running /etc/cont-init.d/99-custom-files
[custom-init] no custom files found, skipping...
cont-init: info: /etc/cont-init.d/99-custom-files exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-mods: starting
s6-rc: info: service init-mods successfully started
s6-rc: info: service init-mods-package-install: starting
s6-rc: info: service init-mods-package-install successfully started
s6-rc: info: service init-mods-end: starting
s6-rc: info: service init-mods-end successfully started
s6-rc: info: service init-services: starting
s6-rc: info: service init-services successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun radarr (no readiness notification)
s6-rc: info: service legacy-services successfully started
s6-rc: info: service 99-ci-service-check: starting
[ls.io-init] done.
s6-rc: info: service 99-ci-service-check successfully started
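For what it's worth, the two root-owned folders are created by the image's init scripts and are probably benign; to rule out a permissions problem on /config itself, something like this (a sketch; the path is from the run command posted later, the IDs from the GID/UID block above):

# On the host: does the config dir belong to the reported PUID/PGID?
ls -ln /volume2/docker/radarr

# If not, align ownership with what the container expects
sudo chown -R 1029:65540 /volume2/docker/radarr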
github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

emberduck commented 1 year ago

Exact same issue and error log as @myna-me above, for all *arr containers. I have removed and redeployed the containers.

Running Docker on the latest version of DSM 6 on a DS1512+. Any other ways to troubleshoot?

myna-me commented 1 year ago

@emberduck, interestingly enough I am also on a DS1512+, with the latest DSM 6.2.4 Update 6. I have tried starting from scratch, starting from Portainer, starting from Synology's wizard, and my usual, starting from Ansible. Same situation, but no problems with version 4.0.5. At some point I had it running on a different Docker server in an Ubuntu box, with the storage mounted over NFS. However, Radarr is very chatty, and having the DB on the network was causing a lot of DB issues, so I decided to move it to the Synology where it would be local.

Could anyone give @emberduck and me some ideas on where else to look? Thank you!

emberduck commented 1 year ago

@emberduck, interestingly enough I am also on a DS1512+, with the latest DSM 6.2.4 Update 6.

Nice! Not getting much activity here, wonder if it’s worth rolling back to an older version until it gets looked at.

j0nnymoe commented 1 year ago

@emberduck if you could actually provide some logs and the docker compose you're using to deploy the container, that would be helpful. @myna-me could you please provide your compose as well?

emberduck commented 1 year ago

@j0nnymoe Sorry mate, didn't mean to come off in a negative way. Appreciate the help.

Compose file:

version: "3.8"
services:
  linuxserver-radarr:
    image: linuxserver/radarr:latest
    container_name: radarr
    environment:
      - PUID=xxxx
      - PGID=xxx
      - TZ=Europe/London
    volumes:
      - /path/to/my/data:/config
      - /path/to/my/movies:/movies
    ports:
      - 7878:7878
    restart: unless-stopped

Radarr log:

[custom-init] No custom files found, skipping...
cont-init: info: /etc/cont-init.d/99-custom-files exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-mods: starting
s6-rc: info: service init-mods successfully started
s6-rc: info: service init-mods-package-install: starting
s6-rc: info: service init-mods-package-install successfully started
s6-rc: info: service init-mods-end: starting
s6-rc: info: service init-mods-end successfully started
s6-rc: info: service init-services: starting
s6-rc: info: service init-services successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun radarr (no readiness notification)
s6-rc: info: service legacy-services successfully started
s6-rc: info: service 99-ci-service-check: starting
[ls.io-init] done.
s6-rc: info: service 99-ci-service-check successfully started
s6-rc: info: service 99-ci-service-check: stopping
s6-rc: info: service 99-ci-service-check successfully stopped
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service init-services: stopping
s6-rc: info: service 00-legacy: stopping
s6-rc: info: service init-services successfully stopped
s6-rc: info: service init-mods-end: stopping
s6-rc: info: service 00-legacy successfully stopped
s6-rc: info: service init-mods-end successfully stopped
s6-rc: info: service init-mods-package-install: stopping
s6-rc: info: service init-mods-package-install successfully stopped
s6-rc: info: service init-mods: stopping
s6-rc: info: service init-mods successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
[custom-init] No custom services found, skipping...
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service 00-legacy: starting
s6-rc: info: service 00-legacy successfully started
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/01-envfile
cont-init: info: /etc/cont-init.d/01-envfile exited 0
cont-init: info: running /etc/cont-init.d/01-migrations
[migrations] started
[migrations] no migrations found
cont-init: info: /etc/cont-init.d/01-migrations exited 0
cont-init: info: running /etc/cont-init.d/10-adduser
usermod: no changes
-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/
Brought to you by linuxserver.io
-------------------------------------
To support the app dev(s) visit:
Radarr: https://opencollective.com/radarr
To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid:    xxxx
User gid:    xxx
-------------------------------------
cont-init: info: /etc/cont-init.d/10-adduser exited 0
cont-init: info: running /etc/cont-init.d/30-config
cont-init: info: /etc/cont-init.d/30-config exited 0
cont-init: info: running /etc/cont-init.d/99-custom-files
[custom-init] No custom files found, skipping...
cont-init: info: /etc/cont-init.d/99-custom-files exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-mods: starting
s6-rc: info: service init-mods successfully started
s6-rc: info: service init-mods-package-install: starting
s6-rc: info: service init-mods-package-install successfully started
s6-rc: info: service init-mods-end: starting
s6-rc: info: service init-mods-end successfully started
s6-rc: info: service init-services: starting
s6-rc: info: service init-services successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun radarr (no readiness notification)
s6-rc: info: service legacy-services successfully started
s6-rc: info: service 99-ci-service-check: starting
[ls.io-init] done.
s6-rc: info: service 99-ci-service-check successfully started
myna-me commented 1 year ago

Thanks for assisting, @j0nnymoe

Here is the docker run command:

docker run -d \
  -v /volume2/docker/radarr:/config \
  -v /volume2/media/downloads/radarr:/downloads \
  -v /volume2/media/movies:/movies \
  -e TZ=America/New_York \
  -e PUID=0 \
  -e PGID=0 \
  -p 7878:7878 \
  --name "radarr" \
  --restart=unless-stopped \
  linuxserver/radarr:latest

Here are a couple more things I've done, which may give you some ideas:

Something else I noticed while in the shell: dmesg showed the following error several times, which may give you another hint:

[36284567.908211] Radarr[5099]: segfault at 20 ip 00007fd2313a7714 sp 00007ffd9ed1c320 error 6 in libclrjit.so[7fd23137d000+26e000]

Google-fu showed other people having segfault errors in the past with their NAS devices, but those were ARM processors; ours is Intel x86.
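Since the segfault lands in libclrjit.so (the .NET JIT), one hedged check is whether the CPU is missing instruction-set extensions the runtime assumes; Cedarview Atoms reportedly top out at SSSE3:

# List the SSE/AVX-family extensions this CPU advertises
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E '^(sse|ssse|avx)'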

As usual, I tried all of the above with v4.0.5, and it is not a problem there.

I am attaching the Portainer log, showing the initial run, the stopping of the container, and the restart: _radarr_logs.txt

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

myna-me commented 1 year ago

OK. Last night, through a lot of Googling, I found a post that mentioned that newer versions of Alpine don't work well with our CPUs (Cedarview on the 1512+ in my case). Though the explanation was not satisfactory, because I have other linuxserver containers running just fine, I finally took the plunge and migrated everything to a PostgreSQL DB so that I can run this container on another server. (This also solves the SQLite DB corruption issue that comes with storing the container's files on the Synology while running the container on another server.) The documentation on the Radarr site (HERE) has a couple of issues, but I was definitely able to do it. The main thing to remember is to NOT restore from a backup. So here are the steps I followed (at a high level):
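Roughly, the Postgres side boils down to this (a sketch with placeholder credentials and paths; the two database names are the ones the Radarr wiki expects):

# 1. Run a Postgres instance (version and credentials are placeholders)
docker run -d --name radarr-postgres \
  -e POSTGRES_USER=radarr \
  -e POSTGRES_PASSWORD=radarr \
  -v /volume2/docker/postgres:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:14

# 2. Create the databases Radarr expects
docker exec radarr-postgres psql -U radarr -c 'CREATE DATABASE "radarr-main"'
docker exec radarr-postgres psql -U radarr -c 'CREATE DATABASE "radarr-log"'

# 3. Point a fresh Radarr at Postgres by adding to /config/config.xml:
#      <PostgresUser>radarr</PostgresUser>
#      <PostgresPassword>radarr</PostgresPassword>
#      <PostgresHost>host-running-postgres</PostgresHost>
#      <PostgresPort>5432</PostgresPort>
#    then restart the container and reconfigure by hand; do NOT restore a SQLite backup.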

It runs much better and snappier, it freed up CPU and memory resources on the Synology, and hey, we can now run the latest version.

I hope this helps someone.

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

theonlyway commented 1 year ago

I don't know if my issue is related to this or not, but chances are it is, given I'm using a Synology as well. It's a pretty old Synology NAS, a DS412+ with an Intel Atom D2700, on DSM 6.2.4-25556 Update 6. The moment the container attempts to start, the Radarr binary segfaults with a core dump (I'm not that good with Linux, so I don't know what that means or how to debug it; that's just what it says when I exec into the container and attempt to run the Radarr binary manually using the same switches the init script was trying to use). The issue appears to be something to do with either Alpine or the musl variant of Radarr (or both). If I change the Dockerfile to use Ubuntu instead of Alpine, everything works. I did attempt to just change the package it downloads to the linux OS variant instead of the linuxmusl variant, but that just resulted in some errors I didn't feel like troubleshooting, so I went with Ubuntu.

This isn't isolated to just this Docker image; I attempted to launch Prowlarr and the same thing happened there. In the case of Radarr, these are the changes I made to the Dockerfile to get it working on Ubuntu:

# syntax=docker/dockerfile:1

FROM ghcr.io/linuxserver/baseimage-ubuntu:jammy

# set version label
ARG BUILD_DATE
ARG VERSION
ARG RADARR_RELEASE
LABEL build_version="Linuxserver.io version:- ${VERSION} Build-date:- ${BUILD_DATE}"
LABEL maintainer="thelamer"

# environment settings
ARG RADARR_BRANCH="master"
ENV XDG_CONFIG_HOME="/config/xdg"

RUN \
  echo "**** install packages ****" && \
  apt-get update && apt-get install -y \
  curl \
  jq \
  libicu-dev \
  sqlite3 && \
  echo "**** install radarr ****" && \
  mkdir -p /app/radarr/bin && \
  if [ -z ${RADARR_RELEASE+x} ]; then \
  RADARR_RELEASE=$(curl -sL "https://radarr.servarr.com/v1/update/${RADARR_BRANCH}/changes?os=linux&runtime=netcore" \
  | jq -r '.[0].version'); \
  fi && \
  curl -o \
  /tmp/radarr.tar.gz -L \
  "https://radarr.servarr.com/v1/update/${RADARR_BRANCH}/updatefile?version=${RADARR_RELEASE}&os=linux&runtime=netcore&arch=x64" && \
  tar xzf \
  /tmp/radarr.tar.gz -C \
  /app/radarr/bin --strip-components=1 && \
  echo -e "UpdateMethod=docker\nBranch=${RADARR_BRANCH}\nPackageVersion=${VERSION}\nPackageAuthor=[linuxserver.io](https://linuxserver.io)" > /app/radarr/package_info && \
  echo "**** cleanup ****" && \
  rm -rf \
  /app/radarr/bin/Radarr.Update \
  /tmp/*

# copy local files
COPY root/ /

# ports and volumes
EXPOSE 7878

VOLUME /config

The downside is that I now have a bigger image (because Ubuntu is bigger than Alpine) and I have to build a new image locally whenever I want to update, but at least it works until someone smarter than me figures out where the actual problem is.
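For anyone wanting the same workaround, building and running it looks roughly like this (the tag and paths are placeholders):

# Build from the modified Dockerfile (the root/ overlay comes from this repo)
docker build -t radarr-ubuntu:local .

# Run it the same way as the official image
docker run -d --name radarr \
  -e PUID=1000 -e PGID=1000 -e TZ=Europe/London \
  -v /path/to/my/data:/config \
  -v /path/to/my/movies:/movies \
  -p 7878:7878 \
  radarr-ubuntu:local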

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. This might be due to missing feedback from OP. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions[bot] commented 1 year ago

This issue is locked due to inactivity