f0cker / crackq

CrackQ: A Python Hashcat cracking queue system
MIT License

Task timeout #16

Closed jllang763 closed 3 years ago

jllang763 commented 4 years ago

I was running a brute-force on a 9-char NTLM hash, mainly because I could and the cracker was not needed for other work at the time. According to CrackQ it was going to take 20 days to complete, which was fine. I checked the task this morning and it had failed with the message

Task exceeded maximum timeout value (1209600 seconds)

Where is this timeout controlled?

f0cker commented 4 years ago

The timeout is set statically here: https://github.com/f0cker/crackq/blob/master/crackq/crackqueue.py#L54

I can move this to the config file though. Would you prefer it there or in the 'Add Job' form?
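For illustration, a hedged sketch of what a config-driven timeout could look like, using Python's standard `configparser`. The config path, section name, and key name here are assumptions for the example, not CrackQ's actual config schema:

```python
import configparser

# 1209600 seconds = 14 days, the value currently hard-coded in crackqueue.py.
DEFAULT_TIMEOUT = 1209600

def get_job_timeout(config_path="/var/crackq/crackq.conf"):
    """Read the job timeout from an INI-style config, with a fallback.

    Section 'app' and key 'job_timeout' are hypothetical names for
    this sketch; a missing file or key falls back to the default.
    """
    config = configparser.ConfigParser()
    config.read(config_path)  # a missing file simply leaves the parser empty
    return config.getint("app", "job_timeout", fallback=DEFAULT_TIMEOUT)
```

The `fallback` keyword means a missing file, section, or key all degrade gracefully to the current hard-coded behaviour.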

jllang763 commented 4 years ago

Why have a timeout value at all?


f0cker commented 4 years ago

It was so that one person or job doesn't hog the queue more than anything, but it doesn't necessarily need to be there. Are you able to recover your job? If you can't, we can probably make it restore from where you left off with a little bit of work.

jllang763 commented 4 years ago

Well, I understand that reason, but it may be better to have it as an optional value in the job submit API. Also, instead of failing the job, it should just move to the completed list with a message about the timeout. I am unable to "restore" the job because it errors out.


f0cker commented 4 years ago

OK. I think the restore will fail because the job is in the failed queue; that's something I will change at some point. If you wanted to restore your job for now, though, you could start a new job with the same parameters and then stop it, and then copy the restore value from the old job to the new one from the host CLI. You just need the job ID for the old job: under /var/crackq/logs/\<job ID>.json, copy the "Restore Point" value from the JSON element over to the new job ID's .json file. That should work.
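The copy step above can be sketched as a small helper. The log path and the "Restore Point" key come from the thread itself; the function name and the placeholder job IDs are purely illustrative:

```python
import json

def copy_restore_point(old_json, new_json, key="Restore Point"):
    """Copy the hashcat restore point from the old job's JSON log
    over to the new job's JSON log, rewriting the new file in place."""
    with open(old_json) as f:
        restore = json.load(f)[key]
    with open(new_json) as f:
        status = json.load(f)
    status[key] = restore
    with open(new_json, "w") as f:
        json.dump(status, f)
    return restore

# e.g. copy_restore_point("/var/crackq/logs/<old job ID>.json",
#                         "/var/crackq/logs/<new job ID>.json")
```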

jllang763 commented 4 years ago

I tried that but it still failed to restore the job. Now the job is in the failed list with the message "No hashes loaded".


f0cker commented 4 years ago

Oh sorry about that. Did you stop it immediately? Maybe try letting the new job run a little before stopping it if so.

jllang763 commented 4 years ago

I let it get started before pausing it.


f0cker commented 4 years ago

Argh, perhaps we can't restore it then. It should be possible, but it's likely not worth the hassle doing it over this channel. Which version are you using: master or the add_inapp_auth_admin branch? I tweaked some of the restore settings in the newer branch, but I'm not sure whether they will help here. Do you see any errors in crackq.log or on stdout with debugging enabled? If you don't want to waste any more time on it, that's fine; I'll add the timeout changes to the roadmap for an upcoming release.

jllang763 commented 4 years ago

I do not see any errors in the debug logs. This system is running the master branch. Being able to pause and restart jobs would be very useful to me. What is the plan for the "timeout changes"?

f0cker commented 4 years ago

Yeah pause/restore works OK in most cases, but I noticed it was failing to restore sometimes so there are a couple of things that need ironing out. One is that it won't restore from the failed queue (only from the complete queue), and there were a couple of other times it's failed for me. One problem I believe I fixed was when the job is stopped before it has a chance to write the json file (while hashcat is still initializing). There was also another issue in the GUI I fixed in the latest branch. There may be more but I wasn't able to reproduce any other cases.

So there are no errors in the logs when you try to restore either the original job or the new one, and debugging is enabled? There should be some indication there. Were the jobs definitely identical? I'll try to reproduce the scenario on my dev box.

The plan for the timeout change is to make it default to 21 days, but have this as a setting in the config file. However, I will also add an option in the configuration file to make this a user option when adding a new job. So if the administrator wants to enforce a timeout to prevent users from hogging the queue, they can, but if they don't care about that, they can leave it up to the user. How does that sound to you?
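That admin/user split could be sketched as follows; the function and parameter names are illustrative, not CrackQ's actual API:

```python
DEFAULT_TIMEOUT = 21 * 24 * 3600  # proposed default: 21 days, in seconds

def resolve_timeout(requested, user_timeouts_allowed):
    """Pick the effective job timeout: honour the user's requested value
    only when the admin has enabled per-job timeouts in the config;
    otherwise fall back to the 21-day default."""
    if user_timeouts_allowed and requested is not None:
        return requested
    return DEFAULT_TIMEOUT
```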

jllang763 commented 4 years ago

The pause/restore has not worked for me at all. I get an error, and it creates a job that fails with the message "No hashes loaded". Yes, I have debug enabled. I will try it again and see whether I get error messages.

The plan for the timeout change sounds good to me.


f0cker commented 4 years ago

Ah, I think I know what it is: copy \<job ID>.hashes from the old job ID to a new file with the new job ID. Also check that the permissions are the same.
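A minimal sketch of that copy, assuming Python's standard `shutil` (`copy2` preserves permission bits and timestamps, though not ownership, which may still need a manual chown); the helper name and job IDs are hypothetical:

```python
import shutil
from pathlib import Path

def clone_hashes_file(logs_dir, old_id, new_id):
    """Copy <old_id>.hashes to <new_id>.hashes in the logs directory,
    preserving mode and timestamps via shutil.copy2."""
    logs = Path(logs_dir)
    dst = logs / f"{new_id}.hashes"
    shutil.copy2(logs / f"{old_id}.hashes", dst)
    return dst

# e.g. clone_hashes_file("/var/crackq/logs", "<old job ID>", "<new job ID>")
```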

jllang763 commented 4 years ago

Repeated all the steps with a new job, copying in the .hashes file. When restarting the new job, I got an error message but nothing in the logs. The job started and then completed a few minutes later with a 0 runtime and "All" hashes cracked, which is not correct.


f0cker commented 4 years ago

That error message is the issue I fixed in the GUI, where it was showing an error even though the job restored correctly, so you can ignore that one. When it shows 'All' under cracked, that's supposed to mean all the hashes were already present in the pot file. I'm not sure why that's triggering for this job; I'll see if I can figure it out.

jllang763 commented 4 years ago

Ok, thanks


f0cker commented 3 years ago

I've added a slider to the job creation modal that lets you set the job timeout in hours, if the admin allows it in the config file. I'm open to changing this to a calendar or something else if that's preferred, but I'm closing this for now.