NuHarborMartin closed this issue 1 year ago.
Implemented in the dev branch, you can try it.
Thank you! I've been testing this on the dev branch for the past few days. It has been working, but I did notice that sometimes, after resuming, it generates a very large workunit, much larger than the specified time per workunit. I am not sure if this is related to the updates in dev, but it seems to happen when a workunit is generated that extends past the specified end time.
After Resume, we run a job benchmark again on the selected hosts and then continue creating the next workunits. Seconds per workunit should be respected, but I will try it.
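For illustration, here is a minimal sketch of how an adaptive scheduler like this might size the next workunit from a fresh post-resume benchmark. This is not Fitcrack's actual code; the function and parameter names are hypothetical, and it only assumes the behavior described above (workunit size ≈ benchmark speed × seconds per workunit).

```python
# Hypothetical sketch, not the real server logic: sizing the next workunit
# from the latest benchmark so the configured seconds-per-workunit is respected.

def next_workunit_size(benchmark_speed_hps: float,
                       seconds_per_workunit: int,
                       remaining_keyspace: int) -> int:
    """Return how many keyspace candidates to assign to the next workunit.

    benchmark_speed_hps  -- hashes per second reported by the latest benchmark
    seconds_per_workunit -- user-configured target duration of one workunit
    remaining_keyspace   -- candidates left before the end of the keyspace
    """
    estimated = int(benchmark_speed_hps * seconds_per_workunit)
    # Never assign more than what remains in the keyspace.
    return min(max(estimated, 1), remaining_keyspace)


if __name__ == "__main__":
    # A host benchmarked at 4 GH/s with 15-minute workunits would get roughly
    # 3.6e12 candidates per workunit; a pessimistic (low) benchmark after
    # resuming shrinks that estimate accordingly.
    print(next_workunit_size(4e9, 15 * 60, 10**13))
```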
But feel free to ping us on Discord (see README) for more details / debugging.
Here's a screenshot from my recent run; the red box is when I resumed the job. It was initially set up to use 15-minute workunits, which you can see worked as expected in the first 3 units. After resuming, it seems to make arbitrary workunits, and the completed timing doesn't line up with what I would expect based on the keyspace.
Generally, though, the second benchmark in your job seems to have reported low speeds, so a pessimistic estimate of the ideal future workunit size was made.
You can try to reproduce it on multiple jobs; otherwise I think it was bad luck (and maybe we should finally provide an option to set a fixed workunit size...).
Which attack mode?
The attack mode is a dictionary attack against NetNTLMv2 hashes (mode 5600). This host has historically been consistently around 4 GH/s over its lifetime, so I am surprised to see such a low keyspace for a 15-minute workunit after the restart. I will do some more testing to see how it responds if I don't give it a start/end time on resume using the edit feature.
Edit: I am also using the new "dictionary file on host" feature in this attack.
Please also try "on server". I was testing this feature with "on server" fragmentation, but hopefully it should not matter.
Request: Add the ability to resume a timed-out job starting at the index after the last finished workunit instead of at 0.
Use case: A job is scheduled to run within a timeframe using the Planned Start and Planned End feature. If the entire keyspace is not completed before the Planned End time, the status is set to "Timeout" after the last workunit in the time window completes. Currently the only option I can see is to "Restart" the job, which restarts the entire job at start index 0 instead of continuing from the last completed workunit.
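As a rough sketch of the requested behavior (again, not the actual Fitcrack server code, and all names here are hypothetical): the resumed job would continue from the index just past the last completed workunit rather than from 0.

```python
# Hypothetical sketch of "resume from last completed workunit" instead of
# restarting at keyspace index 0.

from dataclasses import dataclass


@dataclass
class Workunit:
    start_index: int  # first keyspace index assigned to this workunit
    size: int         # number of candidates in this workunit
    completed: bool


def resume_start_index(workunits: list[Workunit]) -> int:
    """Return the keyspace index a resumed (timed-out) job should continue from."""
    completed = [wu for wu in workunits if wu.completed]
    if not completed:
        return 0  # nothing finished yet, behave like a fresh start
    last = max(completed, key=lambda wu: wu.start_index + wu.size)
    return last.start_index + last.size


if __name__ == "__main__":
    wus = [Workunit(0, 1000, True), Workunit(1000, 1000, True), Workunit(2000, 1000, False)]
    print(resume_start_index(wus))  # -> 2000, not 0
```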