Closed udochrist closed 5 years ago
@udochrist unfortunately the only way of knowing if the nzb is the same is by downloading the nzb file and checking it; by default we just pass the URL to your nzb client (e.g. SABnzbd). Naming alone is definitely not enough to detect duplicates.
SAB does have a feature to pause duplicates. Combining that with the requested feature to fail a snatch after X amount of time would be a possible solution.
Good point. As far as I've understood SAB's deduplication, only successful downloads are recorded, using a hash of the article IDs. For my failed files the name alone would have been perfectly sufficient, but I see that it might not always be.
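The article-ID approach mentioned above could be sketched roughly like this: since two NZBs pointing at the same Usenet upload share the same segment Message-IDs regardless of which indexer served them, hashing those IDs identifies the upload rather than the file name. This is an illustrative sketch of the idea, not SABnzbd's actual implementation:

```python
import hashlib
import xml.etree.ElementTree as ET

# Namespace used by the NZB XML format.
NZB_NS = "{http://www.newzbin.com/DTD/2003/nzb}"

def nzb_article_hash(nzb_xml: str) -> str:
    """Return a hash over the sorted article Message-IDs of an NZB.

    Indexer-specific metadata (subjects, timestamps) is ignored, so
    the same upload fetched from two indexers hashes identically.
    Illustrative sketch only -- not SABnzbd's real code.
    """
    root = ET.fromstring(nzb_xml)
    segments = sorted(seg.text for seg in root.iter(f"{NZB_NS}segment"))
    return hashlib.sha1("\n".join(segments).encode()).hexdigest()
```

Because the hash ignores the file subject, a renamed repost of the same upload would still be caught.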
Hi, when using more than one nzb indexer and getting failed files (too old, incomplete, ...), Medusa picks up the same nzb file from different indexers, tries to download it, and fails again.
While it makes sense to support multiple indexers, in that situation the identical nzb will always fail to download again, since success depends on the news server and not on the indexer.
Would it be possible to store a CRC/hash for failed nzbs and only use a found nzb if its name and hash are not already in the list of failed files? I think that even the file name should already be sufficient in this situation for nzbs across indexers. Possibly Medusa checks source and file right now and is therefore not finding duplicates across indexers.
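The registry suggested above might look something like this: a hypothetical `FailedNzbRegistry` (not Medusa's actual failed-download handling) that records both the release name and a content hash on failure, so the same nzb found later on a second indexer is still recognised:

```python
import hashlib

class FailedNzbRegistry:
    """Remember failed downloads so the same nzb is never re-snatched.

    Hypothetical sketch of the suggestion above. Keys on the release
    name AND a hash of the downloaded nzb content, so a duplicate from
    another indexer matches even when the source URL differs. Note that
    raw nzb bytes can differ slightly between indexers; a production
    version would hash the article IDs instead.
    """

    def __init__(self):
        self._failed_names = set()
        self._failed_hashes = set()

    def record_failure(self, name: str, nzb_bytes: bytes) -> None:
        # Store name case-insensitively plus a content fingerprint.
        self._failed_names.add(name.lower())
        self._failed_hashes.add(hashlib.sha1(nzb_bytes).hexdigest())

    def seen_before(self, name: str, nzb_bytes: bytes) -> bool:
        # Either signal is enough to skip the snatch.
        return (name.lower() in self._failed_names
                or hashlib.sha1(nzb_bytes).hexdigest() in self._failed_hashes)
```

A search result would then only be snatched when `seen_before()` returns `False`, regardless of which indexer produced it.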