theotherp / nzbhydra2

Usenet meta search

Hydra2 timeout #289

Closed ghost closed 4 years ago

ghost commented 5 years ago

2.2.1 - 2048M Sonarr (v3)

Happening on a regular basis. I do hammer hydra with constant requests but even when I'm not, I'm seeing this.

All indexers are unavailable due to failures
All search-capable indexers are temporarily unavailable due to recent indexer errors
All rss-capable indexers are temporarily unavailable due to recent indexer errors

Sonarr log:

19-1-25 17:54:06.8|Warn|FetchAndParseRssService|No available indexers. check your configuration.
19-1-25 17:54:06.8|Info|DownloadDecisionMaker|No results found
19-1-25 17:54:06.8|Info|RssSyncService|RSS Sync Completed. Reports found: 0, Reports grabbed: 0

nzbhydra-debuginfos-2019-01-25-17-56.zip

theotherp commented 5 years ago

The log says that the searches take very long but it's hard to say why. I've just released a new version with improved logging. Please:

  1. Update your hydra instance.
  2. In the main config enable the logging marker "Performance".
  3. Trigger the behavior again.
  4. Post your debug infos again.
ghost commented 5 years ago

Here are the logs you requested using 2.2.4.

nzbhydra-debuginfos-2019-01-26-16-47.zip

theotherp commented 5 years ago

I found one part of the process that takes longer than it should (5-10 seconds and in one case even 35). It's not a part that I ever expected to be slow. I'll try to tune that down.

By the way, 117643 downloads? Jesus...

theotherp commented 5 years ago

I've made some improvements which should hopefully reduce the search times.

If you don't need the history and stats, I recommend setting the history to be kept for only 14 days or so (see "Keep history for..." in the searching settings).

The startup of the new version after the update might take a while because some of the data needs to be migrated. See if the searches are faster now. You can just search the log for "limit has been reached for indexer". Ideally the reported runtime should be very short.
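
As an aside (not part of the reply above), a minimal sketch for pulling those lines out of the log; the log path is an assumption based on the data folder shown later in this thread, so adjust it to your install:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LogScan {
    public static void main(String[] args) throws IOException {
        // Hypothetical log location: NZBHydra's data folder appears as /config/app
        // in the startup log further down in this thread -- adjust as needed.
        Path logFile = Path.of("/config/app/logs/nzbhydra2.log");
        try (var lines = Files.lines(logFile)) {
            lines.filter(line -> line.contains("limit has been reached for indexer"))
                 .forEach(System.out::println); // inspect the reported runtimes manually
        }
    }
}
```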

ghost commented 5 years ago

I seem to be stuck upgrading from 2.2.4 to 2.2.5. Been about 3 minutes upgrading so far.

2019-01-26 22:01:34.900  INFO --- [           main] org.nzbhydra.NzbHydra                    : Starting NzbHydra on 8f0cc46f83cf with PID 290 (/app/lib/core-2.2.5-exec.jar started by hotio in /app)
2019-01-26 22:01:34.906 DEBUG --- [           main] org.nzbhydra.NzbHydra                    : Running with Spring Boot v2.0.4.RELEASE, Spring v5.0.8.RELEASE
2019-01-26 22:01:34.906  INFO --- [           main] org.nzbhydra.NzbHydra                    : The following profiles are active: default
2019-01-26 22:01:36.486  INFO --- [           main] org.nzbhydra.config.BaseConfig           : Using data folder /config/app
2019-01-26 22:01:36.537 DEBUG --- [           main] org.nzbhydra.config.ConfigReaderWriter   : CONFIG_READ_WRITE: Using temporary file /config/app/nzbhydra.yml.bak
2019-01-26 22:01:36.538 DEBUG --- [           main] org.nzbhydra.config.ConfigReaderWriter   : CONFIG_READ_WRITE: Copying temporary file to /config/app/nzbhydra.yml
2019-01-26 22:01:36.565  INFO --- [           main] a.HydraGlobalMethodSecurityConfiguration : Enabling auth type FORM
2019-01-26 22:01:37.354 DEBUG --- [ost-startStop-1] o.n.a.HydraAnonymousAuthenticationFilter : Initializing filter 'hydraAnonymousAuthenticationFilter'
2019-01-26 22:01:37.355 DEBUG --- [ost-startStop-1] o.n.a.HydraAnonymousAuthenticationFilter : Filter 'hydraAnonymousAuthenticationFilter' configured successfully
2019-01-26 22:01:37.451  WARN --- [           main] org.flywaydb.core.Flyway                 : Flyway.setCallbacks(FlywayCallback) has been deprecated and will be removed in Flyway 6.0. Use Flyway.setCallbacks(Callback) instead.
2019-01-26 22:01:37.486  INFO --- [           main] o.f.core.internal.util.VersionPrinter    : Flyway Community Edition 5.1.4 by Boxfuse
2019-01-26 22:01:37.489  INFO --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...
2019-01-26 22:01:37.679  INFO --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.
2019-01-26 22:01:37.683  INFO --- [           main] o.f.c.internal.database.DatabaseFactory  : Database: jdbc:h2:file:/config/app/database/nzbhydra (H2 1.4)
2019-01-26 22:01:37.783  INFO --- [           main] o.f.core.internal.command.DbValidate     : Successfully validated 17 migrations (execution time 00:00.044s)
2019-01-26 22:01:37.804  INFO --- [           main] o.f.core.internal.command.DbMigrate      : Current version of schema "PUBLIC": 1.14
2019-01-26 22:01:37.811  INFO --- [           main] o.f.core.internal.command.DbMigrate      : Migrating schema "PUBLIC" to version 1.15 - ADD SHORT ACCESSTYPE
2019-01-26 22:01:42.482  INFO --- [           main] o.f.core.internal.command.DbMigrate      : Migrating schema "PUBLIC" to version 1.16 - FILL SHORT DOWNLOADS
ghost commented 5 years ago

EDIT: Scratch that, it eventually worked, and I just saw that I clearly misread your comment. D'oh!

theotherp commented 5 years ago

Yes, there's a lot of data to be moved.


theotherp commented 5 years ago

Have you noticed any improvements?

theotherp commented 5 years ago

I hope so.

ghost commented 5 years ago

@theotherp hey there.

I am still seeing this issue with the latest sonarr v3 / Hydra2 v2.3.16

Here are some new logs.

nzbhydra-debuginfos-2019-03-16-16-32.zip

theotherp commented 5 years ago

I'm not seeing any crashes or timeouts caused by Hydra. Most searches are done in less than 10 seconds. I saw some that took 30 seconds or longer but all those were caused by slow indexer responses (e.g. SimplyNZBs: Successfully executed search call in 28263ms with 100 results).

I think Sonarr times out after 30 seconds, and you've configured a timeout of 30 seconds in Hydra as well. So if an indexer takes very long to respond, Sonarr will time out before Hydra can answer. I recommend reducing the timeout in Hydra to 25 seconds or less.
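
This is not from the reply above, but the underlying constraint is simply that Hydra's per-indexer timeout has to stay below the timeout of whatever calls Hydra. A minimal sketch of that idea, with the class and method names invented for illustration and the numbers taken from this thread:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Hypothetical illustration, not NZBHydra's actual code: the caller (Sonarr) is
// assumed to give up after ~30 seconds, so every upstream indexer call gets a
// smaller budget, leaving headroom to assemble and return a response in time.
public class TimeoutBudget {

    static final Duration CALLER_TIMEOUT = Duration.ofSeconds(30);  // what the caller waits
    static final Duration INDEXER_TIMEOUT = Duration.ofSeconds(25); // what we allow per indexer

    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5))
            .build();

    String queryIndexer(String searchUrl) throws Exception {
        if (INDEXER_TIMEOUT.compareTo(CALLER_TIMEOUT) >= 0) {
            throw new IllegalStateException("Indexer timeout must stay below the caller's timeout");
        }
        HttpRequest request = HttpRequest.newBuilder(URI.create(searchUrl))
                .timeout(INDEXER_TIMEOUT) // a slow indexer fails here instead of stalling the caller
                .GET()
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```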

ghost commented 5 years ago

@theotherp thanks for looking into this. I bumped it down to 20s and I'm still seeing timeouts (in Sonarr). Anything showing in my new logs?

nzbhydra-debuginfos-2019-03-18-08-03.zip

theotherp commented 5 years ago

The longest search took 21069ms. That shouldn't be enough for Sonarr to time out. It's either a connection issue or Hydra does not log the search at all.

Check your Sonarr logs for when Hydra times out and compare that with the Hydra log (and post the appropriate sonarr log). You might need to set the sonarr log level to debug for that.

ghost commented 5 years ago

I wiped my sonarr logs and grabbed some new clean ones.

I've given NZBHydra2 10GB of ram just to rule that out. :)

Categories in indexer: 5030,5040,5045,5050,5060,5070,5080

nzbhydra-debuginfos-2019-03-30-10-14.zip

sonarr log.txt

theotherp commented 5 years ago

I think the issue is the following: Sonarr attempts to download an NZB from Drunken Slug via Hydra, but you've already hit your download limit:

19-3-30 12:02:42.3|Warn|HttpClient|HTTP Error - Res: [GET] https://drunkenslug.com/getnzb/70a5fa13b20349740368134d6b3d01741d4da683.nzb&i=19684&r=APIKEY: 429.429
<?xml version="1.0" encoding="UTF-8"?>
<error code="429" description="Request limit reached. To upgrade your account see the options on the main page https://drunkenslug.com"/>

This is interpreted as an indexer error and Sonarr disables Hydra:

19-3-30 12:02:42.3|Error|Nzbget|API Grab Limit reached for http://nzbhydra2:5076/getnzb/api/1987581053959353281?apikey=(removed)
19-3-30 12:02:42.3|Fatal|SonarrErrorPipeline|Request Failed. POST /api/v3/queue/grab/1305348319

[v3.0.1.418] NzbDrone.Core.Exceptions.ReleaseDownloadException: Downloading nzb failed ---> NzbDrone.Common.Http.TooManyRequestsException: HTTP request failed: [429:429] [GET] at [https://drunkenslug.com/getnzb/70a5fa13b20349740368134d6b3d01741d4da683.nzb&i=19684&r=APIKEY]
  at NzbDrone.Common.Http.HttpClient.Execute (NzbDrone.Common.Http.HttpRequest request) [0x00131] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\HttpClient.cs:100 
  at NzbDrone.Common.Http.HttpClient.Get (NzbDrone.Common.Http.HttpRequest request) [0x00007] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\HttpClient.cs:264 
  at NzbDrone.Core.Download.UsenetClientBase`1[TSettings].Download (NzbDrone.Core.Parser.Model.RemoteEpisode remoteEpisode) [0x00028] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Download\UsenetClientBase.cs:46 
   --- End of inner exception stack trace ---
  at NzbDrone.Core.Download.UsenetClientBase`1[TSettings].Download (NzbDrone.Core.Parser.Model.RemoteEpisode remoteEpisode) [0x00100] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Download\UsenetClientBase.cs:67 
  at NzbDrone.Core.Download.DownloadService.DownloadReport (NzbDrone.Core.Parser.Model.RemoteEpisode remoteEpisode) [0x00241] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Download\DownloadService.cs:96 
  at Sonarr.Api.V3.Queue.QueueActionModule.Grab (System.Int32 id) [0x00017] in M:\BuildAgent\work\63739567f01dbcc2\src\Sonarr.Api.V3\Queue\QueueActionModule.cs:53 
  at Sonarr.Api.V3.Queue.QueueActionModule.<.ctor>b__6_0 (System.Object x) [0x00000] in M:\BuildAgent\work\63739567f01dbcc2\src\Sonarr.Api.V3\Queue\QueueActionModule.cs:37 
  at (wrapper dynamic-method) System.Object.CallSite.Target(System.Runtime.CompilerServices.Closure,System.Runtime.CompilerServices.CallSite,System.Func`2<object, object>,object)
  at Nancy.Routing.Route+<>c__DisplayClass4.<Wrap>b__3 (System.Object parameters, System.Threading.CancellationToken context) [0x00049] in <bb3027f50b35411088f45475912cc2ff>:0 

19-3-30 12:02:42.3|Debug|Api|[POST] /api/v3/queue/grab/1305348319: 500.InternalServerError (350 ms)
19-3-30 12:02:42.4|Debug|IndexerFactory|Temporarily ignoring indexer NZBHydra - V2 till 3/30/2019 12:17:42 PM due to recent failures.
19-3-30 12:02:42.5|Debug|IndexerFactory|Temporarily ignoring indexer NZBHydra - V2 till 3/30/2019 12:17:42 PM due to recent failures.

I recommend you either upgrade your DS account or configure the download limit for it in Hydra.

I don't see a problem with Hydra here.

ghost commented 5 years ago

Here's what it looks like when I search for, say, 3 shows all at the same time.

[Screenshot: 2019-04-02_19-12-56]

theotherp commented 5 years ago

I believe you but I think I've explained that (at least according to the log) there's no problem with Hydra.

VexedPanda commented 5 years ago

I'm seeing similar issues, and I think the problem is that Hydra isn't properly abstracting away the fact that e.g. DrunkenSlug is the one serving the file.

What seems to be happening: Hydra tells Sonarr that the URI to use is the one for DrunkenSlug (e.g. http://drunkenslug.com/nzbToDownload). Sonarr tries to use that, gets an API limit error, and honors that by disabling the indexer that provided the URI: Hydra.

What we want to happen: Hydra tells Sonarr that the URI to use is Hydra's own (e.g. http://localhost:5076/download?uri=http://drunkenslug.com/nzbToDownload). Sonarr tries to use that, and Hydra then turns around and tries to get the file from DrunkenSlug (or another indexer, if that's possible). When that fails (because Hydra already knows DrunkenSlug is past its API limit), Hydra returns some sort of temporary error / delayed-retry response (not an API limit error) to Sonarr. Sonarr then keeps the Hydra indexer enabled and retries sometime later, at which point hopefully Hydra can talk to DrunkenSlug again.
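
To make that flow concrete, here is a minimal, hypothetical sketch using the JDK's built-in HttpServer. None of the class names, the port, or the endpoint path are taken from NZBHydra's actual code, and the choice of status code for the temporary error is exactly the open question discussed further below:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ProxyDownloadSketch {

    // Stand-in for the API-hit tracking the aggregator would already have.
    static boolean indexerLimitReached(String indexer) {
        return true;
    }

    static void handleDownload(HttpExchange exchange) throws IOException {
        if (indexerLimitReached("drunkenslug")) {
            // Answer with a temporary, retryable error of our own instead of
            // redirecting the caller to the indexer and letting it see the
            // indexer's 429 directly.
            exchange.getResponseHeaders().add("Retry-After", "3600");
            byte[] body = "Indexer API limit reached, please retry later"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(429, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
            return;
        }
        // Otherwise fetch the NZB from the indexer and stream it back (omitted here).
        exchange.sendResponseHeaders(204, -1);
        exchange.close();
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(5077), 0); // made-up port
        server.createContext("/getnzb", ProxyDownloadSketch::handleDownload);
        server.start();
    }
}
```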

theotherp commented 5 years ago

@vexedpanda How do you serve your nzbs? Do you use redirect or proxy?


VexedPanda commented 5 years ago

I was just using the defaults, which meant redirect. I'll try proxy.

VexedPanda commented 5 years ago

That now seems to result in sonarr reporting "Invalid NZB: Unexpected root element. Expected 'nzb' found 'error'"

theotherp commented 5 years ago

I'll talk to the sonarr devs and try to find a good solution for this issue.


VexedPanda commented 5 years ago

Thank you. I know this has changed a bit from the original issue reported; would you like me to make this its own issue?

As for a solution, I was thinking about it, and something as sweeping as a 5xx error code seems too much (since this isn't a server-wide error, and requests for other NZBs may not fail), but the 4xx codes don't really have anything that's a perfect match.

What may work, though, is an unconventional use of the Retry-After HTTP header combined with a 404 code (where Retry-After isn't usually provided). If we can get them on board, that should be sufficient to indicate to Sonarr etc. that this is a temporary situation and that they may succeed in getting this file after that timeout period has elapsed (vs. a 410 for something that's permanently gone).
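
For illustration only, a sketch of how a downloader could act on that proposal. The outcome names, the example URL, and the assumption that Retry-After carries a number of seconds (it can also be an HTTP date) are all hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.Optional;

// Hypothetical client-side handling: a 404 (or 429) carrying a Retry-After header
// is treated as a per-release, temporary failure; a 410 is treated as permanent.
public class RetryAfterClient {

    enum Outcome { GOT_NZB, RETRY_LATER, GIVE_UP }

    static Outcome fetchNzb(HttpClient client, String url) throws Exception {
        HttpResponse<byte[]> response = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofByteArray());

        Optional<String> retryAfter = response.headers().firstValue("Retry-After");
        int status = response.statusCode();

        if (status == 200) {
            return Outcome.GOT_NZB; // hand the NZB bytes to the download client
        }
        if ((status == 404 || status == 429) && retryAfter.isPresent()) {
            long seconds = Long.parseLong(retryAfter.get()); // assumes delay-seconds form
            System.out.println("Temporarily unavailable, retry in " + seconds + "s");
            return Outcome.RETRY_LATER; // keep the indexer enabled, retry this release later
        }
        if (status == 410) {
            return Outcome.GIVE_UP; // permanently gone
        }
        return Outcome.RETRY_LATER; // conservative default for other errors
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder().connectTimeout(Duration.ofSeconds(10)).build();
        // Illustrative URL shaped like the Hydra download links seen earlier in the thread.
        System.out.println(fetchNzb(client, "http://localhost:5076/getnzb/api/123?apikey=example"));
    }
}
```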

theotherp commented 4 years ago

NZBHydra will now return proper status codes when an indexer's API hit limit is reached, resulting in *arr trying to download the next result in the list.

VexedPanda commented 4 years ago

I’m curious what code you used that allowed you to fail it, but only temporarily. If it’s permanent, that would lead to *arr giving up entirely and never retrying even after the API limit resets, wouldn’t it?

E.g.: Sonarr only uses NZBHydra, NZBHydra only has one indexer with this item, and the API limit is reached. *arr needs to retry this item later, but not pause the entire use of NZBHydra.

theotherp commented 4 years ago

@VexedPanda See the discussion here. *arr will only skip this particular release and try 4 hours later. It will try the next release from the list, even if it's from the same indexer (in this case Hydra).

VexedPanda commented 4 years ago

So *arr updated to treat too many requests as item specific. Makes sense, thanks.

davidh2k commented 4 years ago

Hey @theotherp,

we're still experiencing the issue where the *arrs go over the grab limit of one of the indexers in use.

[screenshot]

corresponding log in hydra:

[screenshot]

*arr either just stops searching or disables the hydra indexer instead.

theotherp commented 4 years ago

@davidh2k Please create a new issue and follow the instructions in the issue template.

In any case, you still need to configure Hydra in a way that your indexers' hit limits are not reached.

davidh2k commented 4 years ago

They are configured, that's why I thought it's still at least related to the original issue. But if you still think I should create a new one, I'm happy to do so. ;)

[screenshot]

[screenshot]

Greetings, David

theotherp commented 4 years ago

I'm closing this issue. Anybody who still has problems with this, please open a new issue.