linuxserver / docker-sabnzbd

GNU General Public License v3.0
259 stars · 67 forks

Slow file transfers with new container #136

Closed · herkalurk closed this issue 1 year ago

herkalurk commented 2 years ago

Expected Behavior

Transfers to and from volumes on the container occur at speeds dictated by hardware

Current Behavior

Copies out of the container are now slow (under 10 MB/sec)

Steps to Reproduce

Unsure; transfer speeds used to be faster until the newest containers were used.

The filesystems I'm moving files to are mounted locally in the Linux file system from a NAS (CIFS mounts).
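(For reference, host-side CIFS mounts like the ones described would come from /etc/fstab entries along these lines. The server name, credentials file, and options below are illustrative guesses matching the volume options shown later in the thread, not taken from the poster's actual setup:)

```
# /etc/fstab (illustrative)
//synthy/movies  /media/movies  cifs  credentials=/root/.smbcredentials,uid=1000,gid=1000,file_mode=0664,dir_mode=0775,vers=3.0  0  0
//synthy/tv      /media/tv      cifs  credentials=/root/.smbcredentials,uid=1000,gid=1000,file_mode=0664,dir_mode=0775,vers=3.0  0  0
```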

Environment

OS: CentOS 7
CPU architecture: x86_64
How docker service was installed: docker-ce official repo via yum

Command used to create docker container (run/create/compose/screenshot)

version: "2.1"
services:
  sabnzbd:
    image: linuxserver/sabnzbd
    container_name: sabnzbd
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago
    volumes:
      - /home/docker/sabnzbd:/config
      - /media/usenet:/media/usenet
      - /media/movies:/media/movies
      - /media/tv:/media/tv
    ports:
      - 8085:8085
    restart: unless-stopped
networks:
  default:
    external:
      name: sickly

Docker logs

2022-06-07 17:02:40,561::INFO::[nzbparser:89] Attempting to add Ecco.2019.1080p.WEBRip.x264-xpost.nzb
2022-06-07 17:02:40,563::INFO::[nzbstuff:762] Replacing spaces with underscores in Ecco.2019.1080p.WEBRip.x264-xpost
2022-06-07 17:02:40,563::INFO::[filesystem:703] Creating directories: /config/Downloads/incomplete/Ecco.2019.1080p.WEBRip.x264-xpost
2022-06-07 17:02:40,563::INFO::[filesystem:703] Creating directories: /config/Downloads/incomplete/Ecco.2019.1080p.WEBRip.x264-xpost/__ADMIN__
2022-06-07 17:02:40,564::INFO::[filesystem:1233] Saving /config/Downloads/incomplete/Ecco.2019.1080p.WEBRip.x264-xpost/__ADMIN__/Ecco.2019.1080p.WEBRip.x264-xpost.nzb.gz
2022-06-07 17:02:40,587::INFO::[nzbparser:477] File [PRiVATE]-[WtFnZb]-[newz[NZB].nfo]-[3+3] - '' yEnc  13 (1+1) added to queue
2022-06-07 17:02:40,588::INFO::[nzbparser:477] File 2_English.srt added to queue
2022-06-07 17:02:40,644::INFO::[nzbparser:477] File Ecco.2019.1080p.WEBRip.x264-RARBG.mp4 added to queue
2022-06-07 17:02:40,686::INFO::[nzbqueue:236] Saving queue
2022-06-07 17:02:40,687::INFO::[notifier:123] Sending notification: NZB added to queue - Ecco.2019.1080p.WEBRip.x264-xpost.nzb (type=download, job_cat=radarr)
2022-06-07 17:02:44,887::INFO::[downloader:676] 7@news-us.usenetserver.com: Initiating connection
2022-06-07 17:02:53,144::INFO::[happyeyeballs:97] Quickest IP address for news-us.usenetserver.com (port 563, preferipv6 False) is 85.12.62.221
2022-06-07 17:02:53,145::INFO::[downloader:676] 3@news-us.usenetserver.com: Initiating connection
2022-06-07 17:02:53,185::INFO::[downloader:676] 5@news-us.usenetserver.com: Initiating connection
2022-06-07 17:02:53,186::INFO::[downloader:676] 1@news-us.usenetserver.com: Initiating connection
2022-06-07 17:02:53,187::INFO::[downloader:676] 2@news-us.usenetserver.com: Initiating connection
2022-06-07 17:02:53,188::INFO::[downloader:676] 4@news-us.usenetserver.com: Initiating connection
2022-06-07 17:02:53,189::INFO::[downloader:676] 6@news-us.usenetserver.com: Initiating connection
2022-06-07 17:02:53,306::INFO::[newswrapper:347] 3@news-us.usenetserver.com: Connected using TLSv1.3 (TLS_AES_256_GCM_SHA384)
2022-06-07 17:02:53,330::INFO::[newswrapper:347] 7@news-us.usenetserver.com: Connected using TLSv1.3 (TLS_AES_256_GCM_SHA384)
2022-06-07 17:02:53,344::INFO::[newswrapper:347] 5@news-us.usenetserver.com: Connected using TLSv1.3 (TLS_AES_256_GCM_SHA384)
2022-06-07 17:02:53,345::INFO::[newswrapper:347] 1@news-us.usenetserver.com: Connected using TLSv1.3 (TLS_AES_256_GCM_SHA384)
2022-06-07 17:02:53,348::INFO::[newswrapper:347] 2@news-us.usenetserver.com: Connected using TLSv1.3 (TLS_AES_256_GCM_SHA384)
2022-06-07 17:02:53,349::INFO::[newswrapper:347] 6@news-us.usenetserver.com: Connected using TLSv1.3 (TLS_AES_256_GCM_SHA384)
2022-06-07 17:02:53,358::INFO::[newswrapper:347] 4@news-us.usenetserver.com: Connected using TLSv1.3 (TLS_AES_256_GCM_SHA384)
2022-06-07 17:02:54,382::INFO::[downloader:893] Connecting 5@news-us.usenetserver.com finished
2022-06-07 17:02:54,383::INFO::[downloader:893] Connecting 6@news-us.usenetserver.com finished
2022-06-07 17:02:54,393::INFO::[downloader:893] Connecting 1@news-us.usenetserver.com finished
2022-06-07 17:02:54,394::INFO::[downloader:893] Connecting 2@news-us.usenetserver.com finished
2022-06-07 17:02:54,395::INFO::[downloader:893] Connecting 4@news-us.usenetserver.com finished
2022-06-07 17:02:54,396::INFO::[downloader:893] Connecting 3@news-us.usenetserver.com finished
2022-06-07 17:02:54,397::INFO::[downloader:893] Connecting 7@news-us.usenetserver.com finished
2022-06-07 17:02:54,492::INFO::[assembler:93] Decoding finished /config/Downloads/incomplete/Ecco.2019.1080p.WEBRip.x264-xpost/[PRiVATE]-[WtFnZb]-[newz[NZB].nfo]-[3+3] - '' yEnc  13 (1+1)
2022-06-07 17:02:54,629::INFO::[assembler:93] Decoding finished /config/Downloads/incomplete/Ecco.2019.1080p.WEBRip.x264-xpost/2_English.srt
2022-06-07 17:04:57,707::INFO::[nzbqueue:784] [N/A] Ending job Ecco.2019.1080p.WEBRip.x264-xpost
2022-06-07 17:04:57,802::INFO::[assembler:93] Decoding finished /config/Downloads/incomplete/Ecco.2019.1080p.WEBRip.x264-xpost/Ecco.2019.1080p.WEBRip.x264-RARBG.mp4
2022-06-07 17:04:57,816::INFO::[nzbqueue:395] [N/A] Removing job Ecco.2019.1080p.WEBRip.x264-xpost
2022-06-07 17:04:57,816::INFO::[nzbqueue:236] Saving queue
2022-06-07 17:04:57,817::INFO::[postproc:129] Saving postproc queue
2022-06-07 17:04:58,751::INFO::[postproc:370] Starting Post-Processing on Ecco.2019.1080p.WEBRip.x264-xpost => Repair:True, Unpack:True, Delete:True, Script:Default, Cat:radarr
2022-06-07 17:04:58,751::INFO::[notifier:123] Sending notification: Post-processing - Ecco.2019.1080p.WEBRip.x264-xpost (type=pp, job_cat=radarr)
2022-06-07 17:04:58,752::INFO::[postproc:727] Starting verification and repair of Ecco.2019.1080p.WEBRip.x264-xpost
2022-06-07 17:04:58,752::INFO::[filesystem:1139] [N/A] /config/Downloads/incomplete/Ecco.2019.1080p.WEBRip.x264-xpost/__ADMIN__/__verified__ missing
2022-06-07 17:04:58,754::INFO::[postproc:774] No par2 sets for Ecco.2019.1080p.WEBRip.x264-xpost
2022-06-07 17:05:00,163::INFO::[postproc:808] Verification and repair finished for Ecco.2019.1080p.WEBRip.x264-xpost
2022-06-07 17:05:00,163::INFO::[downloader:413] Forcing disconnect
2022-06-07 17:05:00,164::INFO::[filesystem:321] Checking if any resulting filenames need to be sanitized
2022-06-07 17:05:00,234::INFO::[filesystem:703] Creating directories: /media/movies/radarr/Ecco.2019.1080p.WEBRip.x264-xpost
2022-06-07 17:05:00,245::INFO::[postproc:428] Running unpacker on Ecco.2019.1080p.WEBRip.x264-xpost
2022-06-07 17:05:00,246::INFO::[postproc:430] Unpacked files []
2022-06-07 17:05:00,246::INFO::[filesystem:321] Checking if any resulting filenames need to be sanitized
2022-06-07 17:05:00,247::INFO::[postproc:434] Finished unpack_magic on Ecco.2019.1080p.WEBRip.x264-xpost
herkalurk commented 2 years ago

Just had it happen again: the NAS is reporting less than 10 MB/sec transfers, but SAB just downloaded from the internet at over 25 MB/sec. Seems like it has something to do with the SMB connection setup. I'm going to try directly making a local volume in docker.

[two screenshots attached showing the reported transfer speeds]

aptalca commented 2 years ago

It looks like an issue with your NAS's reporting, then, as sab is clearly showing you the correct speed

herkalurk commented 2 years ago

SAB doesn't report how fast it unpacks, only how fast it downloads. Also, this isn't consistent: last week the sabnzbd container had no problem unpacking at near 70 MB/sec without changing anything.

I just changed my docker-compose and moved to local docker volumes that are referenced in the compose file, instead of mounting the CIFS share onto the Linux server in /etc/fstab.

docker volume create --driver local --opt type=cifs --opt device=//synthy/movies --opt o=addr=192.168.1.50,username=******,password=******,file_mode=0664,dir_mode=0775,vers=3.0,uid=1000,gid=1000 --name synthy-movies
docker volume create --driver local --opt type=cifs --opt device=//synthy/tv --opt o=addr=192.168.1.50,username=******,password=******,file_mode=0664,dir_mode=0775,vers=3.0,uid=1000,gid=1000 --name synthy-tv
version: "2.1"
services:
  sabnzbd:
    image: linuxserver/sabnzbd
    container_name: sabnzbd
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago
    volumes:
      - /home/docker/sabnzbd:/config
      - /media/usenet:/media/usenet
      - synthy-movies:/media/movies
      - synthy-tv:/media/tv
    ports:
      - 8085:8085
    restart: unless-stopped
networks:
  default:
    external:
      name: sickly
volumes:
  synthy-tv:
    external: true
  synthy-movies:
    external: true
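If the same long option string has to be repeated for every share, a small helper that prints the `docker volume create` command keeps it in one place. A hypothetical sketch (the server address, credential placeholders, and mount options are illustrative, mirroring the commands above):

```shell
# Hypothetical helper: print (not run) the docker volume create command for one CIFS share.
# Replace USER/PASS and the addr= value with real credentials and server address.
cifs_volume_cmd() {  # usage: cifs_volume_cmd <volume-name> <//server/share>
  printf 'docker volume create --driver local --opt type=cifs --opt device=%s --opt o=addr=192.168.1.50,username=USER,password=PASS,file_mode=0664,dir_mode=0775,vers=3.0,uid=1000,gid=1000 --name %s\n' "$2" "$1"
}

cifs_volume_cmd synthy-movies //synthy/movies
cifs_volume_cmd synthy-tv //synthy/tv
```

Piping the output through `sh` (after filling in real credentials) would create both volumes.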
herkalurk commented 2 years ago

It really seems as if the networking for the container is being rate limited. I'm still trying to understand how I can stop the docker service from putting any restrictions on the containers.

herkalurk commented 2 years ago

@aptalca I've seen this behavior in other linuxserver docker containers as well. I've been using the linuxserver radarr and sonarr containers for over a year. From the start the radarr container could only transfer out to CIFS mounts at 8 MB/sec, while the sonarr container could sustain near 70 MB/sec. However, in the last month that has changed, and now the sonarr container is exhibiting the same restrictions. The volumes are mounted the same on the system in /etc/fstab and are also configured the same in the docker-compose files.

herkalurk commented 2 years ago

It looks like an issue with your NAS's reporting, then, as sab is clearly showing you the correct speed

Are you assuming I'm downloading directly to the NAS? I have a separate server that has docker installed. That server connects to the NAS via CIFS shares. So when I'm downloading from the internet I have no issues INTO the container; it's transferring out where I'm seeing the slower speeds.

github-actions[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

chenks commented 2 years ago

I think I'm seeing similar rate-limiting issues.

I'm running this docker container on a Pi 4 connected via gigabit, and SABnzbd is not able to use the full available bandwidth; it never seems to go above 104 Mbps (or thereabouts).
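As a sanity check on those numbers: a ceiling around 104 Mbps is in the neighbourhood of fast-Ethernet line rate, and the Mbps-to-MB/s conversion is just a divide-by-8 (decimal units, ignoring TCP/TLS overhead). A quick sketch:

```shell
# Back-of-envelope: convert a link rate in Mbps to payload MB/s (divide by 8).
# Real throughput will be a few percent lower due to protocol/encryption overhead.
mbps_to_MBps() {
  awk -v m="$1" 'BEGIN { printf "%.1f\n", m / 8 }'
}

mbps_to_MBps 104    # the ~104 Mbps ceiling above -> prints 13.0
mbps_to_MBps 1000   # a full gigabit link -> prints 125.0
```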

herkalurk commented 2 years ago

I'm running this docker container on a Pi 4 connected via gigabit, and SABnzbd is not able to use the full available bandwidth; it never seems to go above 104 Mbps (or thereabouts).

Have you confirmed that the link is running at 1000 Mbps directly on the Pi?

chenks commented 2 years ago

I'm running this docker container on a Pi 4 connected via gigabit, and SABnzbd is not able to use the full available bandwidth; it never seems to go above 104 Mbps (or thereabouts).

Have you confirmed that the link is running at 1000 Mbps directly on the Pi?

Yes. I used iperf and am getting 940 Mbps between the Pi and other devices.

j0nnymoe commented 2 years ago

I think I'm seeing similar rate-limiting issues.

I'm running this docker container on a Pi 4 connected via gigabit, and SABnzbd is not able to use the full available bandwidth; it never seems to go above 104 Mbps (or thereabouts).

I mean, I would say that's as expected, with a slight loss of speed for the usual overheads (encryption etc.).

Scratch that, misread it.

herkalurk commented 2 years ago

In my case, for the most part transfers from the container to the external network run somewhere near 1000 Mbit, but there are occasions where the unpack jobs only seem to run at around 100 Mbit, with the NAS reporting transfer rates around 8 MB/s. Most often the connection to the NAS runs at over 65 MB/s, so allowing for overhead from the docker subsystem, that's fine on a 1000 Mbit link. I generally don't get giant files, and most of the operations are automatic, so I just find out about downloads after the fact. However, if an entire TV season is queued, the slow unpacks aren't ideal.

j0nnymoe commented 2 years ago

I think while you're getting these slow speeds, you need to do some system resource checks using htop / iotop. That should show you where the bottleneck is.
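If iotop isn't available, a rough manual alternative is to sample the "sectors written" column of /proc/diskstats twice and convert the delta; diskstats sectors are always 512 bytes regardless of the device's real sector size. A minimal sketch (the sample values below are made up for illustration):

```shell
# Convert a delta of /proc/diskstats "sectors written" (the 10th field) into MB written.
# diskstats always counts in 512-byte sectors.
sectors_to_mb() {  # usage: sectors_to_mb <earlier-sample> <later-sample>
  awk -v s1="$1" -v s2="$2" 'BEGIN { printf "%.1f\n", (s2 - s1) * 512 / 1000000 }'
}

sectors_to_mb 1000000 1200000   # 200000 sectors written -> prints 102.4
```

Dividing that figure by the sampling interval in seconds gives an approximate write rate in MB/s.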

chenks commented 2 years ago

My ISP connection is 200 Mbps, and when using sab on other devices I can easily max out my connection at around 27 MB/s consistently.

However, when running it in this docker container on my Pi I'm rarely seeing it get to even half of that.

Is it possible to install iperf inside the container so I can run a test from there?

j0nnymoe commented 2 years ago

You could, but just having sab running and looking to see if there is an IO bottleneck is probably easier.

chenks commented 2 years ago

How would I install it inside the container? It doesn't seem to have apt-get.

Also, do you mean using htop / iotop from inside the container, or just on the Pi itself?

chenks commented 2 years ago

You could, but just having sab running and looking to see if there is an IO bottleneck is probably easier.

Still looking for some assistance on this.

I should note that even with "direct unpack" disabled I still don't get anywhere near maxing out my connection; it seems to limit itself to around 10-13 MB/s (it should be close to 26 MB/s).

Attached is a snapshot of htop from the Pi whilst a download was taking place. [htop screenshot]

Attached is a snapshot of iotop from the Pi whilst a download was taking place (and I saw a peak of around 27 MB/s at one point). [iotop screenshot]

In my instance sabnzbd is set to save to ~/chenks/downloads, which is symlinked to a Samba network mount on a NAS (/mnt/smbshare). iperf from the Pi to the NAS shows it is connected via gigabit and able to achieve 940 Mbps in testing.

j0nnymoe commented 2 years ago

While iperf tests are good, you need to test the speeds via the same protocol that your NAS mount uses. See what speed you get when you manually transfer a file from the Pi to your NAS-mounted folder.
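A rough manual test along those lines could look like the sketch below (the mount path and size are illustrative, and it assumes GNU or BusyBox dd for `conv=fsync`, which forces the data to actually hit the share rather than sitting in the page cache):

```shell
# Rough write-throughput check: write size_mb MB of zeroes to a directory,
# fsync at the end, delete the test file, and report the average rate.
write_speed() {  # usage: write_speed <dir> <size_mb>, e.g. write_speed /mnt/smbshare 1024
  dir="$1"; size_mb="$2"
  start=$(date +%s)
  dd if=/dev/zero of="$dir/speedtest.bin" bs=1M count="$size_mb" conv=fsync 2>/dev/null
  end=$(date +%s)
  rm -f "$dir/speedtest.bin"
  awk -v s="$size_mb" -v t="$((end - start))" \
    'BEGIN { if (t < 1) t = 1; printf "%.1f MB/s\n", s / t }'
}
```

Note that `date +%s` only has one-second resolution, so use a file large enough to take at least several seconds, and run it both on the Pi directly and from inside the container to compare.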

j0nnymoe commented 2 years ago

Also, as you're not the original poster, could you please provide more info about your setup, mainly your docker-compose, so we can get an idea of how you've deployed it?

chenks commented 2 years ago

While iperf tests are good, you need to test the speeds via the same protocol that your NAS mount uses. See what speed you get when you manually transfer a file from the Pi to your NAS-mounted folder.

I will do a test just now. Any suggestions for a tool that shows the speed of the transfer? The 'cp' command copies a file but doesn't show any status until it's finished copying.

I used pv to copy a file from the NAS to the home folder. Whilst it wasn't a steady speed, I did see peaks of around 80 MiB/s. The 2.08 GiB file took 2 min 8 sec to complete the transfer. [pv screenshot]

Copying the same file from the Pi to the NAS took 53 seconds to complete the transfer. [pv screenshot]
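For the record, those two pv transfers work out to the following average rates (a quick awk sketch using only the figures quoted above):

```shell
# Average transfer rate in MiB/s from a size in GiB and a duration in seconds.
avg_mibps() {  # usage: avg_mibps <size_GiB> <seconds>
  awk -v g="$1" -v t="$2" 'BEGIN { printf "%.1f\n", g * 1024 / t }'
}

avg_mibps 2.08 128   # NAS -> Pi, 2 min 8 sec: prints 16.6
avg_mibps 2.08 53    # Pi -> NAS: prints 40.2
```

So despite the 80 MiB/s peaks, the NAS-to-Pi average was only about 16.6 MiB/s, while Pi-to-NAS averaged about 40.2 MiB/s.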

Also, as you're not the original poster, could you please provide more info about your setup, mainly your docker-compose, so we can get an idea of how you've deployed it?

No probs, I can do that. Would it be better if I raised a new issue, or is it OK to continue within this one?

  sabnzbd:
    image: ghcr.io/linuxserver/sabnzbd
    container_name: sabnzbd
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=UTC
    volumes:
      - ./volumes/sabnzbd/config:/config
      - ./downloads:/downloads
      - ./downloads/incomplete-downloads:/incomplete-downloads #optional
    ports:
      - 8080:8080
      - 9090:9090

EDIT: I'm wondering if I'm maybe being limited by the SD card the Pi is using (although I think it should still be capable of 27 MB/s in both directions). This is what it's currently using:

[photo of the current SD card]

chenks commented 2 years ago

Also, as you're not the original poster, could you please provide more info about your setup, mainly your docker-compose, so we can get an idea of how you've deployed it?

I've resolved this issue, and it was indeed an issue with the SD card (as I thought it might be in my earlier post). I've replaced the SD card with a higher-spec one, and now I can get 25 MB/s on a download, which is around the max I can get from my internet connection.

Not sure if this would resolve the OP's issue, but it has resolved mine. I replaced it with this:

[photo of the replacement SD card]

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions[bot] commented 1 year ago

This issue is locked due to inactivity