Sometimes a data breach is spread across multiple files, for example multiple .sql files generated by sqlmap. We can already pass multiple explicit paths to this tool, but we also need the ability to simply index everything in a directory and its subdirectories. This could reuse the same command line args, with an input path that's a directory just handled differently from one that's a file, but I'll leave that decision to whoever implements this.
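As a rough illustration of the "same args, branch on directory" approach (a Python sketch with hypothetical names, not the repo's actual code):

```python
from pathlib import Path

def expand_input_paths(paths):
    """Expand each input path: plain files pass through unchanged,
    directories are walked recursively so every file underneath is
    included. Hypothetical helper; names are illustrative only."""
    files = []
    for p in map(Path, paths):
        if p.is_dir():
            # rglob("*") descends into all subdirectories
            files.extend(f for f in sorted(p.rglob("*")) if f.is_file())
        else:
            files.append(p)
    return files
```

The nice thing about this shape is that the rest of the pipeline only ever sees a flat list of files, regardless of how the paths were supplied.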
In my original implementation, when scanning an entire directory I first generated a report of the total number of files, their types and sizes, then prompted whether to proceed, for example (run on all the crap in my temp folder 🤣):
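A sketch of how that kind of pre-scan report might be generated (Python for illustration; the function and field names are assumptions, not the original implementation):

```python
from collections import defaultdict
from pathlib import Path

def summarise_by_type(files):
    """Group files by extension and return (ext, count, total_bytes)
    tuples, sorted by total size descending so the largest types
    appear first. Illustrative sketch only."""
    totals = defaultdict(lambda: [0, 0])  # ext -> [count, bytes]
    for f in files:
        ext = f.suffix.lower() or "(none)"
        totals[ext][0] += 1
        totals[ext][1] += f.stat().st_size
    return sorted(
        ((ext, c, b) for ext, (c, b) in totals.items()),
        key=lambda t: t[2],
        reverse=True,
    )

def confirm_scan(files):
    """Print the summary and ask whether to proceed with the scan."""
    for ext, count, size in summarise_by_type(files):
        print(f"{ext}: {count} files, {size / 1_048_576:.1f} MB")
    return input("Proceed? [y/N] ").strip().lower() == "y"
```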
By showing the largest file types first I could get a good idea of how the data was distributed. It then went through the files from largest to smallest and ran pretty much the exact same code we already have in this repo on a file-by-file basis, writing a distinct count from each file to the console (the output we already have is perfect). Each file's addresses were added to a single large collection, a distinct set was then written from there (the same address often appears in multiple files), and the overall summary went to the console.
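The largest-to-smallest pass and the cross-file dedupe could look something like this (a Python sketch under the assumption of a naive regex extractor; the real per-file extraction is whatever this repo already does):

```python
import re
from pathlib import Path

# Deliberately simplified pattern, standing in for the repo's extractor
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scan_files(files):
    """Process files largest first, print a distinct count per file,
    and accumulate everything into one set so addresses that appear
    in multiple files only count once in the final summary."""
    all_addresses = set()
    for f in sorted(files, key=lambda p: p.stat().st_size, reverse=True):
        found = set(EMAIL_RE.findall(f.read_text(errors="ignore")))
        print(f"{f.name}: {len(found)} distinct addresses")
        all_addresses |= found  # duplicates across files collapse here
    print(f"Total distinct addresses: {len(all_addresses)}")
    return all_addresses
```

Keeping a single set means the memory cost is bounded by the number of unique addresses rather than the total corpus size, which matters when the same dump is exported several times.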
One more variable: there should be a list of ignored file types that are skipped entirely. These can be defined in the app config as they're consistent across executions. For example, here's what I currently have defined (these are all file types I've seen in previous breach corpora but can't extract addresses from):
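A minimal sketch of reading such an ignore list from config and filtering with it (Python; the config key and the extensions shown are placeholders, not the author's actual list):

```python
import json
from pathlib import Path

# Hypothetical config fragment; the real list would live in the app
# config and the extensions here are placeholders only.
CONFIG = json.loads('{"ignoredExtensions": [".jpg", ".png", ".zip", ".exe"]}')

def filter_ignored(files, ignored=None):
    """Drop files whose extension is on the configured ignore list,
    matching case-insensitively."""
    ignored = {e.lower() for e in (ignored or CONFIG["ignoredExtensions"])}
    return [f for f in files if f.suffix.lower() not in ignored]
```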