sabnzbd / sabnzbd

SABnzbd - The automated Usenet download tool
http://sabnzbd.org

SAB fails to DL all parts but other Downloader (JDownloader) works fine #1258

Closed s0mm3rb closed 4 years ago

s0mm3rb commented 5 years ago

From time to time SAB fails to download some parts completely and therefore can't unpack my files; it stops with "not enough repair blocks".

Out of curiosity I tried JDownloader once (with the same provider settings), and since then I use JD to "repair" SAB's failed downloads.

To clarify:

  1. SAB downloads everything, although some parts are smaller than they should be.
  2. I manually detect those parts and redownload them with JD using the same .nzb file.
  3. I tell SAB to retry the unpacking and it works.

Did I misconfigure SAB or something? I have no idea what it could be; I mean, it's obviously not my provider (?), since the downloads work with JD.

thank you

Version: 2.3.4 [2a113f7] OS: FreeNAS 11.1-U7 Python: v2.7.9

Safihre commented 5 years ago

Very strange. What provider do you use? Can you send me an example NZB at safihre@sabnzbd.org?

Safihre commented 5 years ago

Alternatively you can enable Debug logging in the Status window and then after it happens again email me the log (click Show Logging).

s0mm3rb commented 5 years ago

@Safihre thanks for the quick response, you've got mail.

Safihre commented 5 years ago

Similar to another problem reported on the forum, it seems that for some reason the provider is returning an "Article not found" when SABnzbd requests the articles. However when it is then requested a second time (by JDownloader in this case), the article is present. When the download is repeated using SABnzbd, it succeeds (as @s0mm3rb reported to me via email).
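
For reference, this behaviour is easy to probe outside any downloader by asking the server for the same article a few times on one connection. Below is a minimal sketch using Python's nntplib; the host, credentials and message-id are placeholders, so substitute your own provider details and a message-id taken from a failing NZB:

import time
import nntplib

# Placeholders -- substitute your provider, credentials and a message-id
# from the failing NZB.
HOST = "news.example.com"
USER = "username"
PASS = "password"
MSGID = "<some-article-id@example.local>"

conn = nntplib.NNTP(HOST, 119, USER, PASS)
for attempt in range(3):
    try:
        resp, number, message_id = conn.stat(MSGID)
        print("attempt %d: present (%s)" % (attempt + 1, resp))
        break
    except nntplib.NNTPTemporaryError as err:
        # 430 "No such article" -- the same error SABnzbd reports
        print("attempt %d: missing (%s)" % (attempt + 1, err))
        time.sleep(0.5)
conn.quit()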

sanderjo commented 5 years ago

However when it is then requested a second time (by JDownloader in this case), the article is present.

Wow ... that's nasty. Something intermittent? Or has it to do with a delay?

And does it mean a plain SAB-Retry by the user would/could have worked?

Safihre commented 5 years ago

Wow ... that's nasty. Something intermittent? Or has it to do with a delay?

No, these are old posts (80 days). Retry would have fixed it.

s0mm3rb commented 5 years ago
Similar to another problem reported on the forum, it seems that for some reason the provider is returning an "Article not found" when SABnzbd requests the articles. However when it is then requested a second time (by JDownloader in this case), the article is present.

The JDownloader dev Jiaz explained in the German forum that JDownloader is a bit more "tolerant" of missing articles: if a server reports an article as missing, it might still be available a few seconds later. Apparently they implemented a lot of workarounds to get Usenet working: https://board.jdownloader.org/showthread.php?p=438137#post438137

When the download is repeated using SABnzbd, it succeeds

Not really. Just to make it clear: I download the missing files with JD, copy them to SAB's unfinished folder and hit the Retry button in the GUI. I never tried to download them again with SAB after JD succeeded; I will do that next time.

Retry would have fixed it.

I tried that multiple times before I used JD: I just hit the Retry button in the GUI multiple times, and if it fails again I download the files with JD. Is there another way to do it? Maybe force SAB to redownload or re-request the unfinished files?

Thank you, regards


sanderjo commented 5 years ago

From that JDownloader page:

Zb. melden manche Server das ein Artikel nicht vorhanden wäre und wenige Sekunden später gibts den Artikel dann plötzlich.

(Roughly: e.g. some servers report that an article supposedly isn't there, and a few seconds later the article is suddenly available.)

(nice Konjunktiv by the way)

That would mean a click on Retry (and maybe another Retry) in SAB should work. So @s0mm3rb ... can you try that with the same and other downloads that fail?

Safihre commented 5 years ago

It seems specific to some Usenet servers. Retrying every failed article would reduce the download speed so much that I doubt it would be the preferred way to do things.

Hellowlol commented 5 years ago

Can we have a retry counter that tries n times before aborting the entire download?

Safihre commented 5 years ago

I can imagine that we retry only articles for jobs that aren't hopeless.

sanderjo commented 5 years ago

I can imagine that we retry only articles for jobs that aren't hopeless.

... how about: handle it at the end. So, if SAB says "Failed. Click on Retry if you want", SAB itself automatically does 1 or 2 or 3 retries. Rationale: possibly less code change, and (not sure) a bigger gap between retries, which could be good (possibly using a different server pool).
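
A rough external approximation of that idea is already possible via SABnzbd's JSON API: poll the history for failed jobs and call the same Retry that the GUI button triggers. The sketch below is illustrative only; the host, API key and the failed_only parameter are assumptions to adapt to your own setup:

import time
import requests

SAB = "http://localhost:8080/sabnzbd/api"   # assumed local instance
APIKEY = "your-api-key"                     # placeholder
MAX_PASSES = 3

def api(**params):
    params.update(output="json", apikey=APIKEY)
    return requests.get(SAB, params=params, timeout=30).json()

for _ in range(MAX_PASSES):
    # List failed history entries (failed_only is assumed to be supported).
    slots = api(mode="history", failed_only=1).get("history", {}).get("slots", [])
    if not slots:
        break
    for slot in slots:
        api(mode="retry", value=slot["nzo_id"])   # same as clicking Retry in the GUI
    time.sleep(300)   # give the provider some time before the next pass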

Safihre commented 5 years ago

We already have the code to retry articles if they failed and keep a counter. There would be a few seconds in between them. But again, we get so many complaints that "it takes too long to fail missing downloads"; this would basically triple that time.

jcfp commented 5 years ago

Very strange. What provider do you use?

This ^^

Does this happen with a legit usenet provider or just someone selling access to an nntp cache setup leeching off a consumer level subscription, only caching articles that some (other) client requested before?

Safihre commented 5 years ago

I've also seen/heard it happen on Easyusenet and Newshosting, so legit Omicron providers (since they are our ad partners 😂).

d0m84 commented 5 years ago

I'm affected as well. It would be cool to get an option to automatically retry missing articles.

At the moment sabnzbd is forced to fail or repair when using Abavia.

Safihre commented 5 years ago

@d0m84 so your only provider is Abavia? @s0mm3rb Can you list your providers also?

d0m84 commented 5 years ago

@Safihre Yes. But they have many resellers.

s0mm3rb commented 5 years ago

My "provider" is premiumize.me


Safihre commented 5 years ago

Managed to reproduce it now from another user's NZB on Blocknews, also Abavia. Will try to get in contact with them.

Safihre commented 5 years ago

With a few extra lines of code it will also retry failed articles 3 times, and this actually works. Even within ~500 ms the article is suddenly present after being missing before. But this makes downloads with lots of missing articles a lot slower... Hope I can reach someone at Abavia and get it fixed on their side.
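
The shape of that change is roughly a bounded retry loop around the article fetch with a short pause, instead of giving up on the server after the first 430. A simplified sketch follows (not SABnzbd's actual downloader code; server.fetch is a hypothetical helper):

import time

MAX_ARTICLE_TRIES = 3   # cf. the max_art_tries setting discussed below
RETRY_DELAY = 0.5       # seconds; the article is often present again within ~500 ms

def fetch_with_retries(server, article):
    """Retry a 430 'article not found' a few times before giving up on this server."""
    for attempt in range(1, MAX_ARTICLE_TRIES + 1):
        data, missing = server.fetch(article)   # hypothetical fetch helper
        if not missing:
            return data
        if attempt < MAX_ARTICLE_TRIES:
            time.sleep(RETRY_DELAY)
    return None   # still missing: fall back to the next server or par2 repair

The trade-off is visible in the sketch: every genuinely missing article now costs MAX_ARTICLE_TRIES requests plus the delays, which is exactly why jobs with many missing articles fail much more slowly.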

sanderjo commented 5 years ago

Managed to reproduce it now from another user's NZB on Blocknews, also Abavia. Will try to get in contact with them.

Something interesting you noticed? Or just plain stupid "No, not there" ... "No, not there" ... "Yes, I have it here" ... ?

Safihre commented 5 years ago

Yep:

2019-04-21 22:36:05,822::DEBUG::[downloader:721] Thread 20@news.bulknews.eu: Article Part867of1038.AC97D7BE1D6B46C0B684E65B81695664@1553457138.local missing (error=430)
2019-04-21 22:36:05,878::DEBUG::[downloader:721] Thread 20@news.bulknews.eu: Article Part867of1038.AC97D7BE1D6B46C0B684E65B81695664@1553457138.local missing (error=430)
2019-04-21 22:36:05,934::DEBUG::[downloader:721] Thread 20@news.bulknews.eu: Article Part867of1038.AC97D7BE1D6B46C0B684E65B81695664@1553457138.local fetched

Seems it's usually focused on one single connection (not always the same one), but that could just be because it's the only one available.

thezoggy commented 5 years ago

If the article is missing, it should be getting retried (user defined: Switches > Server > max retries)? So you're saying that this isn't working like it should?

Safihre commented 5 years ago

Missing articles were never retried, because... why should they be? The server said they weren't there, so we should trust the server and move on to the next server. The retries are only for cases like CRC failures, decoding failures or when the connection gets interrupted. NZBGet also doesn't retry them, but others seem to retry them at the cost of even slower failures of missing downloads.

thezoggy commented 5 years ago

I've always thought the setting was to retry articles (max_art_tries):

thezoggy commented 5 years ago

~Btw, looks like the max_art_opt setting got dropped from the GUI at some point (still in config/code). It makes it so the max retries is only applied to 'optional' servers.~ The option got moved to Specials. Sure seems like we should fix the max retry code so it actually works.

Safihre commented 5 years ago

I've always thought the setting was to retry articles (max_art_tries):

Yeah, but the code has never done much with it; it's like a never-finished feature. See also #1006.

Btw, looks like the max_art_opt setting got dropped from the GUI at some point (still in config/code). It makes it so the max retries is only applied to 'optional' servers.

Not gone, it's in Specials.

s0mm3rb commented 5 years ago

I had this again today: 13 files were missing a few MB each, and in total I was 12 repair blocks short. I've downloaded those files with JD and hit Retry in the SAB GUI, and of course it worked. @Safihre I've sent you the logs and NZB.

Safihre commented 5 years ago

We know the problem; it's a provider problem in my opinion.

puzzledsab commented 5 years ago

A workaround for this might be to add the affected servers two or more times at different priority levels. If you for instance add it three times and give them 20, 8 and 2 connections, then the first one with 20 connections would hopefully get ahead of the rest and warm the cache. I have no idea what numbers would be good in reality, so some experimentation would be needed. The total number of connections would need to be equal to or lower than the maximum allowed.
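
As an illustration only (the exact key names can differ between SABnzbd versions, and the host is a placeholder), the idea would look roughly like this in sabnzbd.ini, with the same server entered three times at descending priority:

[servers]
[[abavia-warmup]]
host = reader.example-reseller.com
connections = 20
priority = 0
[[abavia-second]]
host = reader.example-reseller.com
connections = 8
priority = 1
[[abavia-third]]
host = reader.example-reseller.com
connections = 2
priority = 2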

thezoggy commented 5 years ago

or just file a complaint with your provider (or vote with your money and get another provider that just works)

sanderjo commented 5 years ago

or vote with your money and get another provider that just works

Indeed!

puzzledsab commented 5 years ago

There are no providers that "just work"; they all have drawbacks: low retention, lots of articles gone because of DMCA/NTD, backstabbing monopolists, and whatnot. I think it's more helpful to find ways to adapt as well as possible to what's out there than to insist that the world should be perfect. I would prefer a server that finds all the articles after some tinkering over one that is very fast when it works but is actually missing more articles.

thezoggy commented 5 years ago

However you feel about DMCA is one thing, but break it down to the basic fundamentals: if I ask the server for x and it says it doesn't have it, but I can't trust that answer? That's broken (or, to be nice, let's say they are just unreliable).

So regardless, get a provider that is more reliable, so you don't have to play this guessing game with the server or come up with hacky workarounds for it lying about availability. We all understand DMCA and ways to deal with it.

thezoggy commented 5 years ago

Yes, every server is different (which is why people use block accounts and have backups), but the difference is that you're not having to ask the same server about the same thing multiple times.

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.