Closed: ZerkerEOD closed this issue 3 months ago
Hello @ZerkerEOD, a couple of things on what you showed:

`ps` output shows only a single task running; your 2 processes are respectively:

- `sh -c [...]`: the call to `sh` made by the Hashtopolis agent, which isn't running hashcat itself, just waiting for the command to finish.
- `./hashcat.bin [...]`: the actual process cracking stuff.

If you call it again using `ps auxf`, for example, you'll see your first process is just the call made by `sh` to hashcat.

From what I can see, everything works as intended in what you showed. If you wanna make sure, simply run the same command as called by `sh` in your screenshot instead of benchmarking, to get your actual speed for the task at hand, not the best-case scenario for a brute-force attack.

Another way to check would be through the agent log itself: it should display its current speed on stdout / its default log, depending on your setup. That lets you confirm the agent sees the same cracking speed, and, together with my previous suggestion, that the slowdown is expected and due to the combination of wordlist and rules instead of BF.
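The parent/child pair described above can be reproduced outside Hashtopolis. This is a minimal sketch, with `sleep` standing in for `hashcat.bin`: launching a command string through the shell leaves an `sh -c` wrapper visible in `ps` next to the real worker (the `; :` compound suffix just keeps `sh` resident instead of letting it exec straight into the child).

```python
import subprocess
import time

# Launch a long-running stand-in for hashcat.bin through the shell,
# the way a shell=True subprocess call does it.
proc = subprocess.Popen("sleep 3; :", shell=True)  # runs /bin/sh -c "sleep 3; :"

time.sleep(0.5)  # give the shell time to fork its child
listing = subprocess.run(["ps", "-ef"], capture_output=True, text=True).stdout

# Both entries appear, just like the two "hashcat" processes in the issue:
print("sh -c" in listing)    # the wrapper, which only waits
print("sleep 3" in listing)  # the actual worker
proc.wait()
```

With `ps auxf` instead of `ps -ef`, the same two entries render as a tree, making it obvious the worker is a child of the wrapper.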
Thanks for explaining that. Is there a reason that the agent doesn't run hashcat on its own rather than spawning a process to spawn the process?
@frenchbeard thanks for the help with the detailed explanation!
@ZerkerEOD That is just the way it is currently implemented, through the `subprocess` library of Python.
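For what it's worth, the extra process falls out of how `subprocess` behaves with `shell=True`: Python execs `/bin/sh -c "<command>"`, and the shell then starts the real program. A minimal sketch of the difference, using `echo` as a hypothetical stand-in for the agent's command:

```python
import subprocess

# shell=True: Python runs /bin/sh -c "<string>"; the shell is the direct
# child, and the real command runs under it (hence the sh -c line in ps).
wrapped = subprocess.run("echo $((6 * 7))", shell=True,
                         capture_output=True, text=True)

# Argument list, no shell: the program is exec'd directly, so no
# "sh -c" wrapper ever appears in ps.
direct = subprocess.run(["echo", "42"], capture_output=True, text=True)

print(wrapped.stdout.strip())  # "42" (the shell expanded the arithmetic)
print(direct.stdout.strip())   # "42" (no shell involved)
```

The `shell=True` form is convenient when the command string contains shell features (pipes, redirections, quoting), which is a plausible reason an agent would use it; the cost is only the one idle `sh` process, not a second cracking job.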
Version Information

- Hashtopolis: 0.14.2, commit d397e4b
- Hashcat: 6.2.6+813
Description
When running from Debian, it appears that the agent is starting two hashcat jobs, reducing performance by more than half, with some overhead.
Benchmark run from the same binary that Hashtopolis placed on the box, without Hashtopolis running:
Here is a `ps aux` for hashcat with Hashtopolis running (note it is running with a `/bin/sh -c` and just a `./hashcat.bin`):

Here is the reported speed in Hashtopolis: