hashview / hashview-old

A web front-end for password cracking and analytics
http://www.hashview.io
GNU General Public License v3.0

Feature Request: Pipe input to enable better support of slow algorithms #348

Open GrepItAll opened 6 years ago

GrepItAll commented 6 years ago

I'm currently dealing with a lot of slow algorithms, for which large dictionaries/rulesets are not appropriate. On some of these algorithms I can only get ~650 H/s.

To counter this, we use smallish dictionaries (maximum 250k words) with small rulesets (best64 or hob064, or something customised and smaller). Because of how hashcat chunks and loads data onto the GPUs, we often find that towards the end of a job some of our GPUs sit idle because they've run out of work (our current job has an hour remaining, but 2 of the 4 GPUs assigned to it have run out of work, so our speed has effectively halved).

To solve this, you can pipe in 'precomputed' candidates (see this section of the hashcat FAQ for details: https://hashcat.net/wiki/doku.php?id=frequently_asked_questions#how_to_create_more_work_for_full_speed, specifically the part towards the end of 'More Work' that discusses slow hashes and using rules 'without amplifiers').

This means that in some cases where we have 1k words and 64 rules, we have a keyspace of 64k. But the conventional approach passes the 1k base words to the GPUs and then applies each of the 64 rules on-device, so only ~1k GPU cores are in use (if I understand my GPU architecture correctly), despite our setup having 14,336 cores available. If we instead pipe in the 'precomputed' candidates, we essentially have 64k base words with no mutations being calculated on the GPU, which means we can utilise all the cores! Hooray!
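To make the utilisation argument concrete, here's a trivial sketch using the example numbers above (1k words, 64 rules, 14,336 cores; the figures are illustrative only, not measurements):

```shell
# Illustrative numbers from the example above, nothing measured.
words=1000     # base words in the small dictionary
rules=64       # e.g. a best64-sized ruleset
cores=14336    # total GPU cores in the example rig

# Conventional mode: only the base words are dispatched to the GPU,
# so at most $words work items exist to spread across the cores.
echo "work items (on-GPU rules): $words"

# Piped mode: every word x rule candidate is its own work item,
# more than enough to keep every core busy.
echo "work items (precomputed pipe): $((words * rules))"
```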

As a current workaround for Hashview, we can compute the candidates, save them out to a text file and then use this as the input wordlist (e.g. hashcat64 --stdout wordlist.txt -r rules/best64.rule > input.txt). The problem is that this means generating a new input file for each wordlist + ruleset combination, which would quickly mean creating and storing dozens of these precomputed wordlists (in our workflow we tailor the wordlists and rules to any intelligence regarding password format that we may have).
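For reference, the two invocation styles look roughly like this (the hash mode and file names here are placeholders for illustration, not taken from a real job):

```shell
# Workaround described above: precompute the candidates to disk,
# then feed the resulting file back in as a plain wordlist.
hashcat64 --stdout wordlist.txt -r rules/best64.rule > input.txt
hashcat64 -m 1800 hashes.txt input.txt   # -m 1800 (sha512crypt) is only an example

# Pipe variant: the same candidate stream, but no intermediate
# file to generate, name, and store per wordlist+ruleset combo.
hashcat64 --stdout wordlist.txt -r rules/best64.rule | hashcat64 -m 1800 hashes.txt
```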

This workaround is also not suitable for some of our users who are not as comfortable with the CLI for hashcat (which is why they're using Hashview).

What we would absolutely love to see is either a 'slow hash' toggle which enables piping of this data instead of the conventional approach, or even a toggle that just states it will pipe 'precomputed' candidates (we can train our users internally to understand the significance of this and when it should be used).

I understand that this will not be a quick feature to implement, but it would make our lives so much easier. Slow hashes are exactly why we have a centrally hosted cracking machine for our users, but Hashview doesn't reflect the fact that conventional amplifiers applied on the GPU just don't fit our use case.

If there's any info I can give regarding our workflow or setup that would help, please don't hesitate to ask.

i128 commented 6 years ago

This is an awesome idea! We've been toying around with the idea of non-cracking-related tasks, which would allow for things like what you referenced above. I think the best approach for this would be to put it on the queue as a non-cracking task, which unfortunately Hashview doesn't support right now. Future plans are to revamp the Hashview queue to allow for this type of operation, but it might be a few revs out.

GrepItAll commented 6 years ago

Glad you think it's an idea worth implementing!

When you say process it as a non-cracking task, do you mean create a task that would just generate the candidates for you (the hashcat64 --stdout wordlist.txt -r rules > input.txt bit)? Or one that would go all the way, piping output from one hashcat instance to another (hashcat64 --stdout wordlist.txt -r rules | hashcat64 -m XXXX hash.txt)? The second is what I'm hoping for, so that I can abstract the complexities away from my users! :)

Whilst I'm on the subject of feature requests:

Have you considered allowing users to rearrange the queue? When a new job comes in, we usually 'triage' it by running it against a small unmutated dictionary (a 10-minute job) before queuing up longer jobs (5h+) if this is not successful. Being able to bump this 'triage' job up to be the next job in the list would be really handy; otherwise we either have to wait for our current queue (which can easily last days) to finish before running this simple job, or delete the queued jobs and requeue them in the new order.

Secondly: user permissions/access rights. For example, it would be useful if users could queue up new jobs without being able to delete or rearrange each other's jobs (or potentially without even seeing the details of what has been queued, other than the fact that 'something' is in the queue before them). A manager or superuser account could then view the entire queue and moderate it as appropriate; for example, if a job is deemed higher priority than normal, they could push it to the front of the queue.

I imagine that both of those are dependent on the queue changes you mentioned above, and the second one would require a significant rewrite of the user model! I can move these into separate issues if you'd like (if they're not already on the Hashview roadmap). Thanks again, keep up the good work.