animetosho / Nyuu

Flexible usenet binary posting tool

Uploads sporadically incomplete? #75

Closed · prdp19 closed this 2 years ago

prdp19 commented 4 years ago

Hi,

I have a problem: many of my uploads are incomplete.

Examples (binsearch.info)

It doesn't matter whether they are a few minutes, hours, or days old, and I have absolutely no idea why.

What have I tried so far?

a.) Tried three different providers
b.) Uploaded with and without post checking
c.) Adjusted the par2 files (I use MultiPar)

Unfortunately, nothing changes. Sometimes uploads work, sometimes they don't.

With post checking I use:

```
"%NYUU%" -h news.xyz.com -S -u username -p password-n 40 -f "Poster <poster@usenet>" -g alt.binaries.group -k1 --skip-errors all -o "%NZB%.nzb" -l 4 -r include "%UPLOADDIR%"
```

Without post checking I use (takes a lot longer for some providers, but not for others):

```
"%NYUU%" -h news.xyz.com -S -u username -p password-n 40 -f "Poster<poster@usenet>" -g alt.binaries.group --skip-errors all -o "%NZB%.nzb" -l 4 -r include "%UPLOADDIR%"
```

Both are run via batch.

Where do I have to look for the problem? Is it par2 or nyuu? Or something completely different?

I'd urgently appreciate some help, as this makes many of my uploads pointless.

Thank you!

prdp19 commented 4 years ago

If I download an incomplete NZB I get the following error message:

```
[numbers] articles had non-matching duplicates
Aborted, cannot be completed
```

I use a single provider for each entire upload, so the error message makes no sense to me.

prdp19 commented 4 years ago

Edit:

My old uploads with JBinUp are OK. Many files are incomplete ([number] articles were missing), but that is acceptable. In addition, I have tested three different Usenet providers, and all of them fail. Some uploads are OK, some are not.

So, my problem is nyuu? But what am I doing wrong there? How can I prevent this message ([numbers] articles had non-matching duplicates)?

animetosho commented 4 years ago

> Where do I have to look for the problem?

Do you get any warnings/errors during the posting process?

> Without post checking I use (takes a lot longer for some providers, but not for others)

You sure that it runs slower without post checking? I'd imagine that with post checking, it should be slower.

Assuming you meant with post checking, your provider is probably dropping posts. The post check picks up that the article is missing, waits, and tries to resubmit. If the provider is basically just blackholing the post, then Nyuu will repeat the process until the retry limit is hit, and it gives up. These retries and waits can slow down the process significantly.

I've found that some providers blackhole posts rather randomly and haven't been able to identify particular reasons why. You can try reducing the article size (say, -a500K) and see if that helps. Stripping/changing some NNTP headers may also have an effect.
If you decide to try a different provider, make sure it's on a different backbone to the ones you're using.
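
For example, taking your first command and only lowering the article size (an untested sketch; host/credentials are your placeholders):

```
"%NYUU%" -h news.xyz.com -S -u username -p password -n 40 -a500K -f "Poster <poster@usenet>" -g alt.binaries.group -k1 --skip-errors all -o "%NZB%.nzb" -l 4 -r include "%UPLOADDIR%"
```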

> How can I prevent this message ([numbers] articles had non-matching duplicates)?

I'm not too familiar with SABnzbd, but searching its code, that message seems to indicate an NZB with duplicate parts.

Which NZB are you using? The one generated by Nyuu, or one from an indexer? If the latter, maybe what's happening is that some posts were detected as failed by Nyuu, and resubmitted. However, the initial post actually did succeed, which means that there's actually a duplicate post. The indexer chooses to list both in its NZB, which confuses the downloader.

You can try to avoid this using the --keep-message-id option, which forces duplicates to use the same Message-ID. You can also increase Nyuu's timeouts so that it tries harder to identify a post as successful, rather than failing early.
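
Something like this, for example (again an untested sketch of your command; --keep-message-id is the only addition):

```
"%NYUU%" -h news.xyz.com -S -u username -p password -n 40 -f "Poster <poster@usenet>" -g alt.binaries.group -k1 --keep-message-id --skip-errors all -o "%NZB%.nzb" -l 4 -r include "%UPLOADDIR%"
```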

I imagine this should only ever occur if you have post checking enabled, but check the warnings/errors emitted to confirm what's going on.

prdp19 commented 4 years ago

Hi,

thank you very much for your detailed answer.

> Do you get any warnings/errors during the posting process?

No.

> You sure that it runs slower without post checking? I'd imagine that with post checking, it should be slower.
>
> Assuming you meant with post checking, your provider is probably dropping posts. The post check picks up that the article is missing, waits, and tries to resubmit. If the provider is basically just blackholing the post, then Nyuu will repeat the process until the retry limit is hit, and it gives up. These retries and waits can slow down the process significantly.
>
> I've found that some providers blackhole posts rather randomly and haven't been able to identify particular reasons why. You can try reducing the article size (say, -a500K) and see if that helps. Stripping/changing some NNTP headers may also have an effect. If you decide to try a different provider, make sure it's on a different backbone to the ones you're using.

That was a misunderstanding.

Of course it takes longer with post checking. Depending on the provider, it can even be significantly slower.

> Which NZB are you using? The one generated by Nyuu, or one from an indexer? If the latter, maybe what's happening is that some posts were detected as failed by Nyuu, and resubmitted. However, the initial post actually did succeed, which means that there's actually a duplicate post. The indexer chooses to list both in its NZB, which confuses the downloader.
>
> You can try to avoid this using the --keep-message-id option, which forces duplicates to use the same Message-ID. You can also increase Nyuu's timeouts so that it tries harder to identify a post as successful, rather than failing early.
>
> I imagine this should only ever occur if you have post checking enabled, but check the warnings/errors emitted to confirm what's going on.

I think the problem really was -k[>0] (the post check).

This is my customized command; it seems to work:

```
"%NYUU%" -h news.xyz.com -S --ssl-ciphers ECDHE-RSA-AES128-SHA -u username -p password-n 40 -f "Poster <poster@usenet>" -g alt.binaries.group -o "%NZB%.nzb" -l 3 -r include "%UPLOADDIR%"
```

Is it okay, or should I tweak it further?

One more question.

If I want nyuu NOT to abort when there are connection problems (e.g. too many connections), is the following sufficient: --skip-errors connect-fail after -g alt.binaries.group?

Greetz.

animetosho commented 4 years ago

That's very odd that you're not getting any warnings/errors. I note that you've supplied -l 4 in your command, which generates a lot of output. Are you certain that errors/warnings in the console haven't just been hidden by the verbose output?

The post check tries to cater for posts which go missing. If you're fine with that happening (per your comment regarding JBinUp), then disabling it is fine.

I note that your command doesn't have a space before the -n parameter (as is the same with your other example commands), which I presume is a mistake.

You are correct regarding --skip-errors.
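
For reference, a sketch based on your adjusted command (untested; same placeholders as before):

```
"%NYUU%" -h news.xyz.com -S --ssl-ciphers ECDHE-RSA-AES128-SHA -u username -p password -n 40 -f "Poster <poster@usenet>" -g alt.binaries.group --skip-errors connect-fail -o "%NZB%.nzb" -l 3 -r include "%UPLOADDIR%"
```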

prdp19 commented 4 years ago

> That's very odd that you're not getting any warnings/errors. I note that you've supplied -l 4 in your command, which generates a lot of output. Are you certain that errors/warnings in the console haven't just been hidden by the verbose output?

In fact, I can't rule out that I missed an error simply because it displayed too much output.

I now use (finally :D) ERRORLEVEL in batch. As long as nyuu exits with ERRORLEVEL 0, everything should be fine, whether with post check or without, right?

> The post check tries to cater for posts which go missing. If you're fine with that happening (per your comment regarding JBinUp), then disabling it is fine.

> I note that your command doesn't have a space before the -n parameter (as is the same with your other example commands), which I presume is a mistake.

Yes. github.com formatted/saved it incorrectly. There is normally a space in between.

> You are correct regarding --skip-errors.

Thank you.

animetosho commented 4 years ago

Error codes are listed here. You may want to consider 0 and 32 as valid, if you're looking to skip some errors.
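
For example, a minimal batch sketch (assuming the same %NYUU%/%NZB%/%UPLOADDIR% variables from your commands) that treats both 0 and 32 as acceptable:

```
"%NYUU%" -h news.xyz.com -S -u username -p password -n 40 -f "Poster <poster@usenet>" -g alt.binaries.group --skip-errors connect-fail -o "%NZB%.nzb" -l 3 -r include "%UPLOADDIR%"
rem 0 = success; 32 may also be considered valid when skipping errors (see the error code list)
if %ERRORLEVEL% EQU 0 goto ok
if %ERRORLEVEL% EQU 32 goto ok
echo Upload failed with exit code %ERRORLEVEL%
exit /b %ERRORLEVEL%
:ok
echo Upload considered successful
```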

L2501 commented 3 years ago

binsearch uses giganews, and giganews is shit. all you need to know.