danielmiessler / SecLists

SecLists is the security tester's companion. It's a collection of multiple types of lists used during security assessments, collected in one place. List types include usernames, passwords, URLs, sensitive data patterns, fuzzing payloads, web shells, and many more.
https://www.owasp.org/index.php/OWASP_Internet_of_Things_Project
MIT License

Localized Wordlist #703

Open agyss opened 2 years ago

agyss commented 2 years ago

Localized wordlists would greatly increase the educational value. The most common German passwords are surely quite different from the Chinese ones, English ones, and so on...

The best starting point I found was: https://github.com/scipag/password-list/tree/main/countries

I would be willing to create the lists on my own if someone can provide access to specific raw data. The only requirement for the raw data would be some kind of link between a password and the user's native language. Things I came up with are:

I would be willing to consolidate the data if it's spread across multiple files, tables, ...

Any ideas on how to handle this? I am aware of the problem with PII data.

ItsIgnacioPortal commented 2 years ago

Ok, so this has a couple of challenges:

I personally have a copy of Collection #1, Collection #2 to #5, and the ANTIPUBLIC breaches, totaling:

So that's the first issue taken care of.

But the second one is a bit trickier. More often than not, data breaches are just combos of email:password; no IP address included. I think we might be able to classify the combos by matching the emails to the X most common names for each country in the world.
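
The matching idea above could be sketched roughly like this. Note that the country name sets and combo lines are hypothetical placeholders, not real breach data:

```python
# Sketch of the proposed classification step: map each email:password combo
# to a country by checking whether the email's local part contains one of
# the most common first names for that country.
from collections import defaultdict

COMMON_NAMES = {  # hypothetical sample data, not the real per-country lists
    "germany": {"hans", "anna", "peter"},
    "spain":   {"jose", "maria", "carlos"},
}

def classify_combo(line: str) -> list[str]:
    """Return the countries whose common names appear in the email local part."""
    email, _, _password = line.partition(":")
    local = email.split("@", 1)[0].lower()
    return [country for country, names in COMMON_NAMES.items()
            if any(name in local for name in names)]

# bucket combos by inferred country
buckets = defaultdict(list)
for combo in ["hans.mueller1@example.com:pass1", "maria_g@example.com:pass2"]:
    for country in classify_combo(combo):
        buckets[country].append(combo)
```

A real run would need much larger name lists per country and a rule for ties (a name common in several countries matches more than one bucket).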

The third issue is a tough one for me. My machine isn't beefy enough to go through 1.2 billion unique combos in a single lifetime 😂. @agyss do you think your machine is up to the task?

agyss commented 2 years ago

I looked into the second issue: by combining these two datasets, we should get a good base for matching: https://web.archive.org/web/20200414235453/ftp://ftp.heise.de/pub/ct/listings/0717-182.zip and https://en.wikipedia.org/wiki/Category%3aLists_of_popular_names

Furthermore, I would filter the email addresses and only take the ones following the pattern firstname.lastname@.... or lastname.firstname@ (allowing numbers within and after the names for higher coverage).

I will preprocess the wordlists to build hash sets of all possible firstname.lastname and lastname.firstname combinations.
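
A minimal sketch of the filter-and-preprocess steps described above, assuming hypothetical sample name lists; the regex and the digit-stripping trick are illustrative choices, not the actual implementation:

```python
import re
from itertools import product

# keep only addresses shaped like name.name@ (digits allowed in/after names)
EMAIL_PATTERN = re.compile(r"^([a-z][a-z0-9]*)\.([a-z][a-z0-9]*)@", re.IGNORECASE)

def matches_pattern(email: str) -> bool:
    return EMAIL_PATTERN.match(email) is not None

first_names = ["hans", "anna"]       # hypothetical sample data
last_names  = ["mueller", "schmidt"]

# hash set of all firstname.lastname and lastname.firstname combinations
combos = {f"{a}.{b}" for a, b in product(first_names, last_names)}
combos |= {f"{b}.{a}" for a, b in product(first_names, last_names)}

def local_part_is_known(email: str) -> bool:
    # strip digits so "hans1.mueller2" still matches "hans.mueller"
    local = email.split("@", 1)[0].lower()
    stripped = re.sub(r"\d+", "", local)
    return stripped in combos
```

The set lookup is O(1) per address, so membership checks stay cheap even with millions of name combinations.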

Long story short: I will find a way, and I do have the hardware to process the data.

g0tmi1k commented 2 years ago

Feel free to open up a pull request with it!

DeveloperOl commented 2 years ago

I have some wordlists with tons of common and uncommon language-specific words and names, etc. (just words, not common passwords) for many popular languages. Those could be used for your educational purposes in combination with hashcat rules ;) I can make a pull request if this is of interest. I would create a folder like /Passwords/localized if that is the right place for it. @g0tmi1k
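
For illustration, here is a tiny Python sketch of the kind of word mangling hashcat rules perform; the three transforms mimic common hashcat rule functions (`c` capitalize, `$1` append a character, `r` reverse), and the sample words are hypothetical:

```python
# Each rule turns a base word from a localized wordlist into a password
# candidate; in practice hashcat applies a rule file to the list directly.
RULES = [
    lambda w: w,                 # :  pass the word through unchanged
    lambda w: w.capitalize(),    # c  capitalize the first letter
    lambda w: w + "1",           # $1 append the digit 1
    lambda w: w[::-1],           # r  reverse the word
]

def apply_rules(words):
    seen = set()
    for word in words:
        for rule in RULES:
            candidate = rule(word)
            if candidate not in seen:   # deduplicate candidates
                seen.add(candidate)
                yield candidate

candidates = list(apply_rules(["sommer", "straße"]))
```

This is why a plain-word list is still useful for cracking: the rules expand each base word into many realistic password variants at attack time.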

ItsIgnacioPortal commented 2 years ago

> I have some wordlists with tons of common and uncommon language-specific words and names, etc. (just words, not common passwords) for many popular languages. Those could be used for your educational purposes in combination with hashcat rules ;) I can make a pull request if this is of interest. I would create a folder like /Passwords/localized if that is the right place for it. @g0tmi1k

That would be very useful for fulfilling this issue :). Please make a pull request @DeveloperOl

DeveloperOl commented 2 years ago

Okay, I will do some cleanup and then make a PR for German, French, Spanish, Polish and Swedish, maybe one by one because generating and cleaning up these lists is a pain. Should lists over 100MB be compressed or not @ItsIgnacioPortal?

ItsIgnacioPortal commented 2 years ago

> Should lists over 100MB be compressed or not @ItsIgnacioPortal?

No, it's fine. Though I don't know up to what point such a crazy-long list would be useful. Could you limit each list to one hundred thousand lines?

DeveloperOl commented 2 years ago

I created these lists by crawling localized web pages after realizing that words were missing from the existing lists here; some password hashes were not cracked because of that, but would have been with a complete list (that's how I found this issue). I could limit the wordlists by usage frequency, but that would drop many valid though uncommon words. I can add top-x lists and the full list in a PR and you can cherry-pick, if that's okay for you @ItsIgnacioPortal. I just need some fine-tuning then and a recrawl with word counts.
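
The frequency-based trimming could look roughly like this; the corpus is a hypothetical stand-in for the crawled pages:

```python
# Count how often each word appears across the crawl and keep the top N,
# so a list can be capped (e.g. at 100k lines) without losing the most
# common words.
from collections import Counter

def top_n_words(corpus_lines, n):
    counts = Counter()
    for line in corpus_lines:
        counts.update(line.lower().split())
    return [word for word, _ in counts.most_common(n)]

corpus = ["der hund und der ball", "der hund schläft"]  # hypothetical crawl
top_words = top_n_words(corpus, 2)  # most frequent words first
```

Keeping the full list alongside the top-x cuts, as proposed above, preserves the rare-but-valid words for users who can afford the longer run.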

ItsIgnacioPortal commented 2 years ago

> I created these lists by crawling localized web pages after realizing that words were missing from the existing lists here; some password hashes were not cracked because of that, but would have been with a complete list (that's how I found this issue). I could limit the wordlists by usage frequency, but that would drop many valid though uncommon words. I can add top-x lists and the full list in a PR and you can cherry-pick, if that's okay for you @ItsIgnacioPortal. I just need some fine-tuning then and a recrawl with word counts.

Alright! I'm awaiting your PR