claudioandre-br closed this issue 7 years ago
BTW: I tried JtR with a zip containing 111210 files. No problems at all.
$ ls -lh todos.zip
-rw-r--r--. 1 claudio claudio 366M Dec 3 05:05 todos.zip
[...]
$ ../run/john ~/hash
Using default input encoding: UTF-8
Loaded 1 password hash (PKZIP [32/64])
Will run 8 OpenMP threads
Press 'q' or Ctrl-C to abort, almost any other key for status
john (todos.zip)
1g 0:00:00:00 DONE 2/3 (2016-12-03 05:08) 4.761g/s 212904p/s 212904c/s 212904C/s 123456..skyline!
Use the "--show" option to display all of the cracked passwords reliably
Session completed
https://github.com/magnumripper/JohnTheRipper/issues/2219#issuecomment-264264126 was a per-file memory leak, fixed now. I guess this issue is more about trying to keep all of a huge file in memory, so a system with enough memory or swap will handle it fine. The solution would be to memory-map the file instead of alloc+fread. Perhaps also adjust the toHex() function so it doesn't try to allocate an output string of twice the data size (not sure it does, but I can imagine it does) but instead just prints to the output stream.
Oh, we also probably have 32-bit limitations. So even a system with 128 GB memory would not handle your file, or any archive containing files larger than 2 GB or perhaps 4.
I am almost 100% sure we can get zip2john to work for any .zip file. It was poorly written, with lots of memory consumption that should not be done that way.
However, we will have 32-bit type issues for the pkzip format. We may be able to work around those by only pulling into memory file data blobs up to a certain size; if a blob is over that, we simply keep an open file handle, the offset, and the size, and when we have to compute the crc32, we read from disk each time. That will allow john to 'work' for any .zip file, but on the really (REALLY) big ones (say 250 GB) it will slow down quite a bit, since every checksum hit will have to read that entire blob from disk. BUT at least it would allow john to work. If someone is trying to crack something that large, then they are willing to wait the time needed (I would assume). Something that large would usually be a backed-up disk partition, where they 'sort of' know the password but trial by hand is not working, so they are using mangling rules. Allowing john to actually work, even if it reads the disk a bunch, is better than simply coring or not loading the data at all.
I think this also brings up going back to a mode where we do not store the blob in the input hash (as hex). This sort of rolls back the large-hash handling mode, but possibly in extreme situations we really should roll that behavior back. @magnumripper can you comment on this thought process please.
I think we should just change all code to using 64-bit fseek/ftell and size_t/off_t for any sizes. That should mean we can handle any size as long as we're a 64-bit build. A 32-bit build will have 32-bit limitations and frankly I couldn't care less! I hear the hashcat team already discussing dropping support for 32-bit! I definitely think we should keep supporting 32-bit, but there's no point jumping through hoops for edge cases.
Second, we should change the toHex() function to output its data to a stream (i.e. stdout) instead of allocating and returning a buffer twice the size of the input data. No alloc needed at all.
Finally, for extreme cases we could simply use a memory map of the input file instead of allocating/freeing any buffer at all. That's much simpler to code, and ends up working optimally for systems with very very much or very very little memory - the virtual memory handler takes care of everything. Having said that, I doubt it's needed. I haven't ever seen a zip file that large. Most huge archives seem to be RAR.
I see no need for reverting to non-blob. Why would we do that? Yes, a 4 GB file may end up as an 8 GB input file. We could do a little better using Base-64, but it would still be 5.3 GB. And that case will be very rare (e.g. the smallest file in the archive being that big), and it will work just fine on a 64-bit build anyway.
I see no need for reverting to non-blob.
What I can see coming, sooner or later, is some alternate, binary, input format. Or, even better, DROP the need for zip2john (et al) and let you do things like ./john archive.zip. To support that we'd need to add something(s) to the format interface.
One thing we'd need to "solve" is what the heck a pot entry would be for that!? For non-hashes, I bet we could get away with something like
$zip2$<SHA-256_of_input file>:plaintext
I think this issue may still exist in 1.8.0-jumbo-1.
When I run zip2john all works as expected and a 4G hash file is created:
$ ./zip2john encrypted.zip > encrypted.hash
ver 2.0 encrypted.zip->017 HD-TS x264-CPG.mkv PKZIP Encr: cmplen=2142276603, decmplen=2143196027, crc=513DE5A7
$ ls -l encrypted.hash
-rw-rw-r--. 1 ddreggors ddreggors 4284553336 May 12 13:48 encrypted.hash
However, running john against this file causes a core dump with an invalid pointer:
$ ./john --show encrypted.hash
*** Error in `./john': munmap_chunk(): invalid pointer: 0x00007fef6c0e6010 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x791fb)[0x7ff282c4b1fb]
/lib64/libc.so.6(cfree+0x1f8)[0x7ff282c58468]
./john[0x66dd59]
./john[0x66bc53]
./john[0x669760]
./john[0x66a354]
./john[0x66b2ae]
/lib64/libc.so.6(__libc_start_main+0xf1)[0x7ff282bf2401]
./john[0x405a5a]
======= Memory map: ========
00400000-00857000 r-xp 00000000 fd:00 56235484 /home/ddreggors/JohnTheRipper-bleeding-jumbo/run/john
00a56000-00a57000 r--p 00456000 fd:00 56235484 /home/ddreggors/JohnTheRipper-bleeding-jumbo/run/john
00a57000-00ad6000 rw-p 00457000 fd:00 56235484 /home/ddreggors/JohnTheRipper-bleeding-jumbo/run/john
00ad6000-00d28000 rw-p 00000000 00:00 0
011a7000-01527000 rw-p 00000000 00:00 0 [heap]
7fef6c0e6000-7ff06b6f8000 rw-p 00000000 00:00 0
7ff06b6f8000-7ff16b2f9000 rw-p 00000000 00:00 0
7ff26a7ee000-7ff26a804000 r-xp 00000000 fd:00 42468462 /usr/lib64/libgcc_s-6.3.1-20161221.so.1
7ff26a804000-7ff26aa03000 ---p 00016000 fd:00 42468462 /usr/lib64/libgcc_s-6.3.1-20161221.so.1
7ff26aa03000-7ff26aa04000 r--p 00015000 fd:00 42468462 /usr/lib64/libgcc_s-6.3.1-20161221.so.1
7ff26aa04000-7ff26aa05000 rw-p 00016000 fd:00 42468462 /usr/lib64/libgcc_s-6.3.1-20161221.so.1
7ff26aa05000-7ff27aa06000 rw-p 00000000 00:00 0
7ff27aa06000-7ff28159f000 r--p 00000000 fd:00 42468883 /usr/lib/locale/locale-archive
7ff28159f000-7ff281611000 r-xp 00000000 fd:00 42473146 /usr/lib64/libpcre.so.1.2.8
7ff281611000-7ff281810000 ---p 00072000 fd:00 42473146 /usr/lib64/libpcre.so.1.2.8
7ff281810000-7ff281811000 r--p 00071000 fd:00 42473146 /usr/lib64/libpcre.so.1.2.8
7ff281811000-7ff281812000 rw-p 00072000 fd:00 42473146 /usr/lib64/libpcre.so.1.2.8
7ff281812000-7ff281835000 r-xp 00000000 fd:00 42475775 /usr/lib64/libselinux.so.1
7ff281835000-7ff281a35000 ---p 00023000 fd:00 42475775 /usr/lib64/libselinux.so.1
7ff281a35000-7ff281a36000 r--p 00023000 fd:00 42475775 /usr/lib64/libselinux.so.1
7ff281a36000-7ff281a37000 rw-p 00024000 fd:00 42475775 /usr/lib64/libselinux.so.1
7ff281a37000-7ff281a39000 rw-p 00000000 00:00 0
7ff281a39000-7ff281a50000 r-xp 00000000 fd:00 42475389 /usr/lib64/libresolv-2.24.so
7ff281a50000-7ff281c50000 ---p 00017000 fd:00 42475389 /usr/lib64/libresolv-2.24.so
7ff281c50000-7ff281c51000 r--p 00017000 fd:00 42475389 /usr/lib64/libresolv-2.24.so
7ff281c51000-7ff281c52000 rw-p 00018000 fd:00 42475389 /usr/lib64/libresolv-2.24.so
7ff281c52000-7ff281c54000 rw-p 00000000 00:00 0
7ff281c54000-7ff281c57000 r-xp 00000000 fd:00 42476407 /usr/lib64/libkeyutils.so.1.5
7ff281c57000-7ff281e56000 ---p 00003000 fd:00 42476407 /usr/lib64/libkeyutils.so.1.5
7ff281e56000-7ff281e57000 r--p 00002000 fd:00 42476407 /usr/lib64/libkeyutils.so.1.5
7ff281e57000-7ff281e58000 rw-p 00000000 00:00 0
7ff281e58000-7ff281e65000 r-xp 00000000 fd:00 42475218 /usr/lib64/libkrb5support.so.0.1
7ff281e65000-7ff282065000 ---p 0000d000 fd:00 42475218 /usr/lib64/libkrb5support.so.0.1
7ff282065000-7ff282066000 r--p 0000d000 fd:00 42475218 /usr/lib64/libkrb5support.so.0.1
7ff282066000-7ff282067000 rw-p 0000e000 fd:00 42475218 /usr/lib64/libkrb5support.so.0.1
7ff282067000-7ff282069000 r-xp 00000000 fd:00 42472415 /usr/lib64/libfreebl3.so
7ff282069000-7ff282268000 ---p 00002000 fd:00 42472415 /usr/lib64/libfreebl3.so
7ff282268000-7ff282269000 r--p 00001000 fd:00 42472415 /usr/lib64/libfreebl3.so
7ff282269000-7ff28226a000 rw-p 00002000 fd:00 42472415 /usr/lib64/libfreebl3.so
7ff28226a000-7ff282298000 r-xp 00000000 fd:00 42475209 /usr/lib64/libk5crypto.so.3.1
7ff282298000-7ff282498000 ---p 0002e000 fd:00 42475209 /usr/lib64/libk5crypto.so.3.1
7ff282498000-7ff28249a000 r--p 0002e000 fd:00 42475209 /usr/lib64/libk5crypto.so.3.1
7ff28249a000-7ff28249b000 rw-p 00030000 fd:00 42475209 /usr/lib64/libk5crypto.so.3.1
7ff28249b000-7ff28249e000 r-xp 00000000 fd:00 42474893 /usr/lib64/libcom_err.so.2.1
7ff28249e000-7ff28269d000 ---p 00003000 fd:00 42474893 /usr/lib64/libcom_err.so.2.1
7ff28269d000-7ff28269e000 r--p 00002000 fd:00 42474893 /usr/lib64/libcom_err.so.2.1
7ff28269e000-7ff28269f000 rw-p 00003000 fd:00 42474893 /usr/lib64/libcom_err.so.2.1
7ff28269f000-7ff282774000 r-xp 00000000 fd:00 42475217 /usr/lib64/libkrb5.so.3.3
7ff282774000-7ff282974000 ---p 000d5000 fd:00 42475217 /usr/lib64/libkrb5.so.3.3
7ff282974000-7ff282983000 r--p 000d5000 fd:00 42475217 /usr/lib64/libkrb5.so.3.3
7ff282983000-7ff282985000 rw-p 000e4000 fd:00 42475217 /usr/lib64/libkrb5.so.3.3
7ff282985000-7ff2829cf000 r-xp 00000000 fd:00 42475092 /usr/lib64/libgssapi_krb5.so.2.2
7ff2829cf000-7ff282bcf000 ---p 0004a000 fd:00 42475092 /usr/lib64/libgssapi_krb5.so.2.2
7ff282bcf000-7ff282bd1000 r--p 0004a000 fd:00 42475092 /usr/lib64/libgssapi_krb5.so.2.2
7ff282bd1000-7ff282bd2000 rw-p 0004c000 fd:00 42475092 /usr/lib64/libgssapi_krb5.so.2.2
7ff282bd2000-7ff282d8f000 r-xp 00000000 fd:00 42474775 /usr/lib64/libc-2.24.so
7ff282d8f000-7ff282f8e000 ---p 001bd000 fd:00 42474775 /usr/lib64/libc-2.24.so
7ff282f8e000-7ff282f92000 r--p 001bc000 fd:00 42474775 /usr/lib64/libc-2.24.so
7ff282f92000-7ff282f94000 rw-p 001c0000 fd:00 42474775 /usr/lib64/libc-2.24.so
7ff282f94000-7ff282f98000 rw-p 00000000 00:00 0
7ff282f98000-7ff282fb0000 r-xp 00000000 fd:00 42475317 /usr/lib64/libpthread-2.24.so
7ff282fb0000-7ff2831b0000 ---p 00018000 fd:00 42475317 /usr/lib64/libpthread-2.24.so
7ff2831b0000-7ff2831b1000 r--p 00018000 fd:00 42475317 /usr/lib64/libpthread-2.24.so
7ff2831b1000-7ff2831b2000 rw-p 00019000 fd:00 42475317 /usr/lib64/libpthread-2.24.so
7ff2831b2000-7ff2831b6000 rw-p 00000000 00:00 0
7ff2831b6000-7ff2831e3000 r-xp 00000000 fd:00 42475074 /usr/lib64/libgomp.so.1.0.0
7ff2831e3000-7ff2833e2000 ---p 0002d000 fd:00 42475074 /usr/lib64/libgomp.so.1.0.0
7ff2833e2000-7ff2833e3000 r--p 0002c000 fd:00 42475074 /usr/lib64/libgomp.so.1.0.0
7ff2833e3000-7ff2833e4000 rw-p 0002d000 fd:00 42475074 /usr/lib64/libgomp.so.1.0.0
7ff2833e4000-7ff2833eb000 r-xp 00000000 fd:00 42477893 /usr/lib64/libcrypt-nss-2.24.so
7ff2833eb000-7ff2835ea000 ---p 00007000 fd:00 42477893 /usr/lib64/libcrypt-nss-2.24.so
7ff2835ea000-7ff2835eb000 r--p 00006000 fd:00 42477893 /usr/lib64/libcrypt-nss-2.24.so
7ff2835eb000-7ff2835ec000 rw-p 00007000 fd:00 42477893 /usr/lib64/libcrypt-nss-2.24.so
7ff2835ec000-7ff28361a000 rw-p 00000000 00:00 0
7ff28361a000-7ff28361d000 r-xp 00000000 fd:00 42474872 /usr/lib64/libdl-2.24.so
7ff28361d000-7ff28381c000 ---p 00003000 fd:00 42474872 /usr/lib64/libdl-2.24.so
7ff28381c000-7ff28381d000 r--p 00002000 fd:00 42474872 /usr/lib64/libdl-2.24.so
7ff28381d000-7ff28381e000 rw-p 00003000 fd:00 42474872 /usr/lib64/libdl-2.24.so
7ff28381e000-7ff283833000 r-xp 00000000 fd:00 42476920 /usr/lib64/libz.so.1.2.8
7ff283833000-7ff283a32000 ---p 00015000 fd:00 42476920 /usr/lib64/libz.so.1.2.8
7ff283a32000-7ff283a33000 r--p 00014000 fd:00 42476920 /usr/lib64/libz.so.1.2.8
7ff283a33000-7ff283a34000 rw-p 00015000 fd:00 42476920 /usr/lib64/libz.so.1.2.8
7ff283a34000-7ff283b3c000 r-xp 00000000 fd:00 42474928 /usr/lib64/libm-2.24.so
7ff283b3c000-7ff283d3b000 ---p 00108000 fd:00 42474928 /usr/lib64/libm-2.24.so
7ff283d3b000-7ff283d3c000 r--p 00107000 fd:00 42474928 /usr/lib64/libm-2.24.so
7ff283d3c000-7ff283d3d000 rw-p 00108000 fd:00 42474928 /usr/lib64/libm-2.24.so
7ff283d3d000-7ff283dce000 r-xp 00000000 fd:00 42476250 /usr/lib64/libgmp.so.10.3.1
7ff283dce000-7ff283fce000 ---p 00091000 fd:00 42476250 /usr/lib64/libgmp.so.10.3.1
7ff283fce000-7ff283fcf000 r--p 00091000 fd:00 42476250 /usr/lib64/libgmp.so.10.3.1
7ff283fcf000-7ff283fd0000 rw-p 00092000 fd:00 42476250 /usr/lib64/libgmp.so.10.3.1
7ff283fd0000-7ff284205000 r-xp 00000000 fd:00 42474636 /usr/lib64/libcrypto.so.1.0.2k
7ff284205000-7ff284404000 ---p 00235000 fd:00 42474636 /usr/lib64/libcrypto.so.1.0.2k
7ff284404000-7ff284420000 r--p 00234000 fd:00 42474636 /usr/lib64/libcrypto.so.1.0.2k
7ff284420000-7ff28442d000 rw-p 00250000 fd:00 42474636 /usr/lib64/libcrypto.so.1.0.2k
7ff28442d000-7ff284431000 rw-p 00000000 00:00 0
7ff284431000-7ff284499000 r-xp 00000000 fd:00 42474904 /usr/lib64/libssl.so.1.0.2k
7ff284499000-7ff284698000 ---p 00068000 fd:00 42474904 /usr/lib64/libssl.so.1.0.2k
7ff284698000-7ff28469c000 r--p 00067000 fd:00 42474904 /usr/lib64/libssl.so.1.0.2k
7ff28469c000-7ff2846a3000 rw-p 0006b000 fd:00 42474904 /usr/lib64/libssl.so.1.0.2k
7ff2846a3000-7ff2846c8000 r-xp 00000000 fd:00 42477772 /usr/lib64/ld-2.24.so
7ff28474e000-7ff2848aa000 rw-p 00000000 00:00 0
7ff2848c5000-7ff2848c8000 rw-p 00000000 00:00 0
7ff2848c8000-7ff2848c9000 r--p 00025000 fd:00 42477772 /usr/lib64/ld-2.24.so
7ff2848c9000-7ff2848ca000 rw-p 00026000 fd:00 42477772 /usr/lib64/ld-2.24.so
7ff2848ca000-7ff2848cb000 rw-p 00000000 00:00 0
7ffc668fa000-7ffc6691c000 rw-p 00000000 00:00 0 [stack]
7ffc669df000-7ffc669e1000 r--p 00000000 00:00 0 [vvar]
7ffc669e1000-7ffc669e3000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
Aborted (core dumped)
Following up on that last comment, I would like to add that Fedora has john (not jumbo) in its yum repos. Installing it and using that john against the previous hash file yields this:
$ john --show ./encrypted.hash
ecd67fb84e4f2c754613624bcf25b7011f97fbaba650272d26c54e321539701c20e1ad3f9f1adecd82e9d3d40e84e846604688ea9dd3aa8d77bff79e3cb5fe452b094*$/pkzip2$:NO PASSWORD::::encrypted.zip
Since this zip is encrypted and has a password, zip2john from jumbo seems to have failed.
I think this issue may still exist in 1.8.0-jumbo-1.
Jumbo-1 is like three years old and we are literally several thousand commits past it. Did you mean bleeding jumbo (as in the latest GitHub version)?
This issue should be fixed since last week or so. Needs testing.
I am sorry yes... the bleeding was the one I meant.
$ ./john --list=build-info |head -n 1
Version: 1.8.0.9-jumbo-1-bleeding
This (1.8.0.9-jumbo-1) is what I was using above, not 1.8.0-jumbo-1.
This issue should be fixed since last week or so. Needs testing.
It was not as of two days ago; that is when I downloaded the version I was using to show these errors.
Confirmed
$ ./john --show ../../encrypted.hash
0 password hashes cracked, 1 left
Still does not crack the password, but at least it did not core dump, and it actually says 0 cracked, 1 left :+1:
I'm not sure, but I think this problem still persists. I used the latest bleeding-jumbo release and zip2john is producing a hash file several gigabytes large. Is there anything I need to change in the code or on my machine? The zip file is 8 GB and contains one file of 15 GB.
Unfortunately an 8 GB file will produce 16 GB worth of hex output and you will need a lot of memory (and preferably a lot of swap as well) to load it. Also, even if you have 64 GB of memory that 'hash' will be slow as hell to attack since we'll need to calculate a CRC on 8 GB of data for each candidate tried (unless we can early reject some of them).
Well if you do have a lot of memory and the password isn't too strong, it should be possible for sure. In my opinion, a modern computer shouldn't have less than 32 GB of memory.
I was just going to suggest you run zipinfo on the archive. This looks pretty good, I think. The fact zip2john picked a larger file instead is kind of a bug.
So a false positive, too bad. Perhaps you used the -c option in some invalid way. What's the output of zipinfo Mailbackup.zip?
OK, then I believe you'll simply need a lot of memory. However, if you can confirm the cracking speed ends up very poor (compared to a benchmark of the pkzip format), we could probably mitigate that by implementing file magic support for .pst files.
Thanks a lot for your support. I will try it then with a proper machine. What would you suggest as hardware parameters?
@magnumripper Besides or instead of support for specific file types, we could detect low entropy or long sequences of 0's (or of any same byte value, for that matter) for any large files (where CRC calculation would be expected to be slower). The problem is what we do when we do not see low entropy (as we won't most of the time, because the tested password will be wrong) - do we calculate the CRC anyway? If so, there won't be any speedup. Perhaps we need an option (exposed to the user) that would disable CRC checks and replace them with an entropy check of the first N bytes (parameter to that option). This could even be a zip2john option (rather than a john one, even though it primarily affects john), so that we won't have to have a format-specific option in john and we'd also avoid the "non-hash" file size and RAM requirement issue.
@martinlang83 You could also consider a known-plaintext attack if you need the decrypted data rather than the password:
Thank you @magnumripper, that helped me use it properly. I now have an 8 GB hash file. I ran john against this hash file; it is quite slow, more or less 60000p/s.
@solardiz Thanks a lot, but I don't quite understand the plaintext attack thing. I know that in my zip there is one large PST file (MS Outlook), so I don't think PST is a plain-text format?!
@martinlang83 We're already abusing this GitHub issue too much instead of discussing the JtR aspects on john-users, and the known-plaintext attack is nearly off-topic for both, but you can read about such attacks in general here:
https://en.wikipedia.org/wiki/Known-plaintext_attack
For pkzip encryption in particular, the attack is possible when you know (or guess) at least the first 11 bytes of plaintext - and plaintext does not mean plain text; it means the original unencrypted data. For many file formats, the first 11+ bytes are part of the format-specific header and are not specific to the file's contents, so they can be taken from another file of the same format.
I now have a 8GB hash file. I run john against this hash file. it is quite slow more or less 60000p/s.
I just now committed file magic support for zip2john (use it by simply adding -m to your zip2john command line), but in my tests with a several-GB file, it made almost no difference - I got over 38Mp/s even without it (due to our excellent early rejection of invalid Huffman data). My guess is the 60Kp/s you were seeing will soon increase by several orders of magnitude (there's a long start-up time that skews the initial statistics).
Very cool, thanks a lot! Do you use any special commands? I'm not getting this high performance. I get 36Mp/s on a 16-core machine, so about 2Mp/s per core.
This still seems very slow :-(
I get 36Mps on a 16core machine. so each core about 2MP/s.
Is this with --fork=16? Our PKZIP format isn't OpenMP-capable, and needs fork.
Yes, with fork 16.
Our PKZIP format isn't OpenMP-capable, and needs fork.
Yes, the PKZIP format is OpenMP-capable. It scales fairly well, but fork may be faster. The speed I reported was using 16 threads (8 real cores), IIRC.
See https://github.com/magnumripper/JohnTheRipper/issues/2219#issuecomment-264264126
I created the simple test case seen below (on well).
zip2john handles zero.zip but can't handle fzero.zip.