codefaux / deemix-for-lidarr

(Theoretically) deemix patched for Lidarr addon use

Step By Step Guide #3

Closed dfatih closed 5 months ago

dfatih commented 5 months ago

Hey, would you be so kind as to give a detailed step-by-step guide to install it on a NAS?

codefaux commented 5 months ago

"a NAS" -- that's like saying, "a computer."

I assume you're the one who was asking regarding Synology's proprietary products? (My memory is not great.) I figured I'd take a look at it, but since I don't have one, it's kind of guesswork, and I'm unwilling to tell you step by step what to click without having access to one to try it myself.

It looks kind of awful. I don't know if it has Docker support built in, or how to create a new container without extra tools. I was able to find that semi-recently they added Container Manager -- I can't tell if it's a built-in feature or a downloadable, installable plugin. If you're experienced with Docker, you should be able to figure out how things work from there; an example docker-compose.yaml is provided on my repo and should be cake-easy to modify from system to system.

If you're not experienced with Docker, I apologize but I'm unable to help. Synology's OS and management software are closed-source, closed-access, proprietary, will only run on their own hardware, and I don't have any of it.

If anyone finds this and has a Synology system they can write up a quick guide on, feel free to comment - I'll leave this Issue open.

dfatih commented 5 months ago

Oh yeah, my bad. I was referring to Synology. I've already installed Docker with Portainer. You can SSH into the device to run docker compose files, or run Docker CLI commands without SSH.

codefaux commented 5 months ago

Ah, roger -- in that case, it sounds like you probably have everything you need. There's a docker-compose.yaml provided on my repo, plus info on what to configure in both Lidarr and Deemix. If you understand the Compose format you can reverse it to assemble a container in Portainer very easily; it's only a few paths and ports and the image name.
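For a rough idea of the shape of it (this is only a sketch -- the host paths and the port below are placeholders, use whatever the repo's docker-compose.yaml actually specifies):

version: "3.3"

services:
  deemix:
    image: codefaux/deemix-for-lidarr
    container_name: deemix
    restart: unless-stopped
    ports:
      - "6595:6595"  # placeholder -- use the port from the repo's compose file
    volumes:
      - /path/on/nas/deemix/config:/config        # placeholder host path for Deemix's config
      - /path/on/nas/deemix/downloads:/downloads  # placeholder host path for finished downloads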

I suppose, to clarify: which part are you having trouble figuring out?

dfatih commented 5 months ago

Well, via the Docker package I've downloaded the Lidarr image with the plugin release, and looked into the yaml file. The image name confuses me. It says: codefaux/deemix-for-lidarr. How can I build a tar file from your GitHub repo which I can upload to the Docker package?

codefaux commented 5 months ago

It sounds like you don't understand Docker as a concept, but please correct me if I'm mistaken. I don't mean to offend; this is just the impression I'm getting.

You've installed the Docker package, which installed and set up the Docker service. Your interaction with the package is done; from here on you will be interacting with Docker itself, via docker-compose on the CLI or Portainer in the WebUI (or similar).
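On the CLI side the whole loop is basically just this, run from the folder that holds docker-compose.yaml:

cd /path/to/your/compose/folder   # placeholder path
docker-compose up -d     # create and start the containers in the background
docker-compose logs -f   # follow the container logs
docker-compose down      # stop and remove the containers (data in mapped folders stays)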

Docker automatically downloads the image by itself; you do not provide a tar file in any situation. If you must build the Docker image yourself (fully and wholly not required) you can clone the repository and run docker buildx build . from within the repository's directory -- however, it will not be tagged, so you will need to provide the SHA from the build process instead of an image name, or add --tag <name> to the build command. Docker will create the image (using the instructions within the Dockerfile ) and store it (disassembled; Docker builds images from multiple 'layers' and merges them dynamically at runtime) for you to use or upload to a registry.
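If you really wanted to go that route, it would look roughly like this (the tag name is just an example):

git clone https://github.com/codefaux/deemix-for-lidarr.git
cd deemix-for-lidarr
docker buildx build --tag deemix-for-lidarr:local .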

This is all handled for you, however.

The docker-compose.yaml tells Docker how to do all of the things it needs to do. It's like running "apt-get install sshd && systemctl start sshd" -- Docker arranges dependencies and installation by itself, as well as handling starting and running the container, linking it to resources, etc. The dependencies are fully self-contained so your core system does not need modification. The filesystems are isolated, so you must provide Path or Volume mappings to give it access to the "outside world" etc. The 'image' is merely a pointer to several layers of a filesystem which assemble into a running service, with instructions on how to put them together. This allows multiple Docker services to use the same layers, reducing overall disk footprint while simplifying dependencies, setup, installation, etc.

It helps to think of Docker as a VM -- it is NOT a VM, but provides similar-feeling isolation to the end user. (This is dramatic over-simplification, this is not to be taken as an understanding or implication of similar security.) However, like a VM, it cannot access certain hardware or any external filesystems without being configured. This is where Ports, Volumes, etc come in.

Ports are similar to port forwarding -- if the container runs something on port 9000, and you want it on port 2456 of the host, this is entered into docker-compose.yaml.

https://docs.docker.com/compose/compose-file/compose-file-v3/#ports

ports:
  - "3000"  # expose container port 3000 on a random (ephemeral) host port
  - "3000-3005"  # expose container ports 3000-3005 on random host ports
  - "8000:1234"  # open 8000 on all host interfaces, to 1234 on container
  - "9090-9091:8080-8081" # open 9090-9091 on all host interfaces, to 8080-8081 on container
  - "127.0.0.1:8001:9001" # open 8001 on only 127.0.0.1, to 9001 on container
  - "192.168.200.201:5000-5010:5000-5010" # open range on host's 192.168.200.201, to range on container
  - "127.0.0.1::5000" # open a random host port on 127.0.0.1, to 5000 on container
  - "6060:6060/udp" # add /udp or /tcp to specify the protocol, tcp is assumed by default
  - "12400-12500:1240" # bind one free host port from the range to port 1240 on the container

(The IP-specific ports can only be assigned to an IP address active on the host. If you need a specific IP per Docker container, unique to that container, it can be done but grows more complex.)
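(For the curious, one way to do that is a macvlan network. Rough sketch only -- it assumes the host's LAN interface is eth0 and the subnet is 192.168.200.0/24, and it's not something this setup needs:)

networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0  # host interface the container addresses will live on
    ipam:
      config:
        - subnet: 192.168.200.0/24

services:
  deemix:
    networks:
      lan:
        ipv4_address: 192.168.200.250  # example address reserved for this container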

Volumes are either named volumes (storage Docker manages itself in its own data area -- this is probably NOT what you want here) or bind mounts (giving the container access to specific host folders).

https://docs.docker.com/compose/compose-file/compose-file-v3/#volumes

volumes:
  # Just specify a container path and let Docker create an anonymous volume for it
  - /var/lib/mysql

  # Absolute-path bind mount -- /opt/data on your server will appear at /var/lib/mysql inside the container
  - /opt/data:/var/lib/mysql

  # Named volume -- Docker will create or reuse a volume called 'datavolume' and the container will see its contents at /var/lib/mysql
  - datavolume:/var/lib/mysql

The environment section sets environment variables, like PUID, PGID, etc -- similar to export from the Linux CLI. Usually containers use specifically-named environment variables for internal configuration items; each container will need to document which variables are supported/expected and their syntax.
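For example (the values are placeholders -- match them to the user/group that owns your media folders, and check each image's docs for which variables it actually reads):

environment:
  - PUID=1000         # user ID the container should run as
  - PGID=1000         # group ID the container should run as
  - TZ=Europe/Berlin  # timezone, if the image supports it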

This is a barely surface-level brief on how Docker works; hopefully it helps. I'm more than happy to clarify other points as required/requested. If you prefer more realtime conversation, Telegram is the only messenger I currently use. If you also use it, we can continue to discuss either here or there.

dfatih commented 5 months ago

No, not at all. I'm a beginner and happy to learn. I thought that to make the compose work, the image had to be registered on the Docker Hub site. Let me read the docs and try to get it to work. Really thankful for your help. So basically I just have to do: docker-compose up?

After sudo docker-compose up -d I am getting this error:

ERROR: The Compose file './docker-compose.yaml' is invalid because:
'name' does not match any of the regexes: '^x-'

You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1. For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/
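If I read that right, the file just needs a supported version key and the service definitions nested under services:, roughly like this (service name as an example):

version: "3.3"

services:
  lidarr:
    image: hotio/lidarr:pr-plugins
    ...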

dfatih commented 5 months ago

Solved it.

lidarr:
  container_name: lidarr
  image: hotio/lidarr:pr-plugins
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-file: ${DOCKERLOGGING_MAXFILE}
      max-size: ${DOCKERLOGGING_MAXSIZE}
  labels:

codefaux commented 5 months ago

@dfatih That looks good, except Deemix's downloads directory needs to also be shared with Lidarr, or Lidarr won't be able to import the music. If you use the same container path ( /downloads ) on both containers, it works without setting Remote Paths in Lidarr. (ie, just add - ${DOCKERCONFDIR}/deemix/downloads:/downloads to the volumes in the Lidarr container.)
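In other words, something along these lines, assuming the same ${DOCKERCONFDIR} layout as your snippet:

deemix:
  volumes:
    - ${DOCKERCONFDIR}/deemix/downloads:/downloads

lidarr:
  volumes:
    - ${DOCKERCONFDIR}/deemix/downloads:/downloads  # same host folder, same container path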

dfatih commented 5 months ago

I have this weird problem and do not know how to solve it. Deemix can download the files and Lidarr sees them, but they get stuck at Waiting to Import. I looked into the log files and Lidarr is locking the database. I am constantly getting this error:

[v2.1.1.3878] code = Busy (5), message = System.Data.SQLite.SQLiteException (0x800007AF): database is locked
database is locked
  at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
  at System.Data.SQLite.SQLiteDataReader.NextResult()
  at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave)
  at System.Data.SQLite.SQLiteCommand.ExecuteReader(CommandBehavior behavior)
  at System.Data.SQLite.SQLiteCommand.ExecuteNonQuery(CommandBehavior behavior)
  at Dapper.SqlMapper.ExecuteCommand(IDbConnection cnn, CommandDefinition& command, Action`2 paramReader) in /_/Dapper/SqlMapper.cs:line 2858
  at Dapper.SqlMapper.ExecuteImpl(IDbConnection cnn, CommandDefinition& command) in /_/Dapper/SqlMapper.cs:line 581
  at NzbDrone.Core.Datastore.BasicRepository`1.UpdateFields(IDbConnection connection, IDbTransaction transaction, TModel model, List`1 propertiesToUpdate) in ./Lidarr.Core/Datastore/BasicRepository.cs:line 385
  at NzbDrone.Core.Datastore.BasicRepository`1.SetFields(TModel model, Expression`1[] properties) in ./Lidarr.Core/Datastore/BasicRepository.cs:line 335
  at NzbDrone.Core.Messaging.Commands.CommandQueueManager.Update(CommandModel command, CommandStatus status, String message) in ./Lidarr.Core/Messaging/Commands/CommandQueueManager.cs:line 258
  at NzbDrone.Core.Messaging.Commands.CommandQueueManager.Complete(CommandModel command, String message) in ./Lidarr.Core/Messaging/Commands/CommandQueueManager.cs:line 206
  at NzbDrone.Core.Messaging.Commands.CommandExecutor.ExecuteCommand[TCommand](TCommand command, CommandModel commandModel) in ./Lidarr.Core/Messaging/Commands/CommandExecutor.cs:line 115
  at System.Dynamic.UpdateDelegates.UpdateAndExecuteVoid3[T0,T1,T2](CallSite site, T0 arg0, T1 arg1, T2 arg2)
  at NzbDrone.Core.Messaging.Commands.CommandExecutor.ExecuteCommands() in ./Lidarr.Core/Messaging/Commands/CommandExecutor.cs:line

codefaux commented 5 months ago

From what I'm seeing after looking around Google for ten minutes, the error is expected to be caused externally, on the host system. The database is locked any time it's in use by Lidarr; one thread or another is not releasing it when another is attempting to use it. This could be a Synology issue, it could be overloaded I/O, etc., but it appears to be unrelated to the Servarr projects.

Almost every single report of "Database is locked" on Servarr projects -- aka Lidarr, Sonarr, Radarr -- is on a Synology NAS, BTW. I am increasingly happy every day not to be locked into Synology hardware.

One other big common thing is that most people having this issue are using NFS shares, and the single most common suggestion is "stop using NFS for Docker containers." I'm not familiar enough with NFS to help there; I've never managed to get it to work in a completely reliable/performant manner.

The short end of it here is, I don't know, but it's not related to my work. I suggest the Servarr project Discord for support.