
Duplicut :scissors:

Quickly dedupe massive wordlists, without changing the order



Mentioned in awesome-pentest

Created by nil0x42 and contributors



:book: Overview

Nowadays, password wordlist creation usually implies concatenating multiple data sources.

Ideally, the most probable passwords should stand at the start of the wordlist, so that the most common passwords are cracked instantly.

With existing dedupe tools, you are forced to choose between preserving the order OR handling massive wordlists.

Unfortunately, wordlist creation requires both.

So I wrote duplicut in highly optimized C to address this very specific need :nerd_face: :computer:


:bulb: Quick start

git clone https://github.com/nil0x42/duplicut
cd duplicut/ && make
./duplicut wordlist.txt -o clean-wordlist.txt

:wrench: Options

:book: Technical Details

:small_orange_diamond: 1- Memory optimized:

A uint64 is enough to index lines in the hashmap, by packing size information into the pointer's unused bits:
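As a rough illustration of the idea (a minimal sketch, not duplicut's actual field layout): assuming user-space pointers fit in 48 bits, as on typical x86-64 Linux, the remaining 16 bits of a uint64 slot can carry the line length.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch only (not duplicut's actual layout): on x86-64 Linux,
 * user-space pointers fit in the low 48 bits, so the line length can be
 * packed into the upper 16 bits of a single uint64_t hashmap slot. */

#define PTR_BITS 48
#define PTR_MASK ((1ULL << PTR_BITS) - 1)

static uint64_t pack_line(const char *line, uint16_t size)
{
    uint64_t ptr = (uintptr_t)line;
    assert((ptr & ~PTR_MASK) == 0);      /* pointer must fit in 48 bits */
    return ((uint64_t)size << PTR_BITS) | ptr;
}

static const char *line_ptr(uint64_t slot)  { return (const char *)(uintptr_t)(slot & PTR_MASK); }
static uint16_t    line_size(uint64_t slot) { return (uint16_t)(slot >> PTR_BITS); }

int main(void)
{
    uint64_t slot = pack_line("password123", 11);
    printf("%u: %.*s\n", (unsigned)line_size(slot), line_size(slot), line_ptr(slot));
    return 0;
}
```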

:small_orange_diamond: 2- Massive file handling:

If the whole file can't fit in memory, it is split into virtual chunks, in such a way that each chunk uses as much RAM as possible.

Each chunk is then loaded into the hashmap, deduped, and tested against subsequent chunks.

That way, execution time stays bounded by the n-th triangle number (n being the number of chunks): at most n * (n + 1) / 2 chunk passes.
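To make that bound concrete, here is a minimal, runnable sketch that only counts the passes of the "load one chunk, filter every later chunk against it" scheme described above (the chunk count of 4 is an arbitrary example, not something duplicut hard-codes):

```c
#include <stdio.h>

/* Sketch of the chunk scheduling: chunk i is loaded into the hashmap and
 * deduped, then every later chunk j is filtered against it. Counting the
 * passes shows the n-th triangle number bound. */
int main(void)
{
    const unsigned nchunks = 4;          /* example: file splits into 4 chunks */
    unsigned passes = 0;

    for (unsigned i = 0; i < nchunks; i++) {
        passes++;                        /* load + dedupe chunk i              */
        for (unsigned j = i + 1; j < nchunks; j++)
            passes++;                    /* filter chunk j against chunk i     */
    }
    printf("%u chunks -> %u passes (n*(n+1)/2 = %u)\n",
           nchunks, passes, nchunks * (nchunks + 1) / 2);
    return 0;                            /* prints: 4 chunks -> 10 passes ...  */
}
```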

:bulb: Troubleshooting

If you find a bug, or something doesn't work as expected, please compile duplicut in debug mode and post an issue with the output attached:

# debug level can be from 1 to 4
make debug level=1
./duplicut [OPTIONS] 2>&1 | tee /tmp/duplicut-debug.log