LandGrey / pydictor

A powerful and useful hacker dictionary builder for brute-force attacks
https://github.com/LandGrey/pydictor
GNU General Public License v3.0
3.22k stars · 630 forks

Can pydictor find and remove non-UTF-8 characters? #34

Open Privacy6484847 opened 2 years ago

Privacy6484847 commented 2 years ago

Can pydictor find and remove non-UTF-8 characters?

LandGrey commented 2 years ago

> Can pydictor find and remove non-UTF-8 characters?

pydictor does not currently include this feature; you can use another tool, such as iconv, to do it.

Privacy6484847 commented 2 years ago

Thanks for the answer.

I cleaned the file with iconv using `iconv -f utf-8 -t utf-8 -c test.txt -o clean_test.txt`. Then I ran pydictor on `clean_test.txt` with `pydictor --len 6 20 -tool handler clean_test.txt -o super_clean_test.txt`.
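For reference, the `iconv -c` step above (re-encode as UTF-8 while dropping invalid byte sequences) can be reproduced in pure Python. This is only an illustrative sketch with a made-up helper name, not part of pydictor:

```python
# Hypothetical helper reproducing `iconv -f utf-8 -t utf-8 -c`: decode
# each line as UTF-8 while silently dropping byte sequences that are
# not valid UTF-8, then write the clean bytes back out.
def clean_utf8(src_path, dst_path):
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        for raw in src:
            # errors="ignore" drops undecodable bytes such as 0x8d
            dst.write(raw.decode("utf-8", errors="ignore").encode("utf-8"))
```

Usage would be e.g. `clean_utf8("test.txt", "clean_test.txt")`.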

But I got the following error:

```
File "E:\pydictor-2.0.5\pydictor.py", line 107, in <module>
  tool_parser()
File "E:\pydictor-2.0.5\lib\parse\argsparse.py", line 104, in tool_parser
  get_handler_dic(pyoptions.args_tool[1])
File "E:\pydictor-2.0.5\tools\handler.py", line 24, in get_handler_dic
  for item in f.readlines():
File "C:\Python310\lib\encodings\cp1252.py", line 23, in decode
  return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 3858: character maps to <undefined>
```
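The traceback shows the root cause: the wordlist was opened without an explicit encoding, so Python fell back to the Windows locale code page (cp1252), which has no character for byte 0x8d. A sketch of the usual fix follows; the helper name is invented for illustration and this is not pydictor's actual code:

```python
# On Windows, open() without an encoding argument falls back to the
# locale code page (cp1252 in the traceback above); byte 0x8d has no
# cp1252 mapping, hence the UnicodeDecodeError.  An explicit encoding
# plus a tolerant error handler avoids the crash.
def read_wordlist(path):
    with open(path, encoding="utf-8", errors="ignore") as f:
        return [line.rstrip("\n") for line in f]
```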
LandGrey commented 2 years ago


I just added a printable-character filter tool to pydictor. You can download the latest version (2.1.5.6) and run `python pydictor.py --len 6 20 -tool printabler test.txt` to get your wordlist.
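For context, a printable-character filter combined with the `--len 6 20` bounds might look roughly like this. It is an illustrative sketch, not pydictor's actual `printabler` implementation:

```python
import string

# ASCII printable characters; str.isprintable() could be used instead
# to also keep printable non-ASCII characters.
PRINTABLE = set(string.printable)

def filter_words(lines, min_len=6, max_len=20):
    # Hypothetical sketch: strip non-printable characters from each
    # line, then keep only words whose length falls inside the
    # --len style bounds.
    for line in lines:
        word = "".join(ch for ch in line.strip() if ch in PRINTABLE)
        if min_len <= len(word) <= max_len:
            yield word
```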

Privacy6484847 commented 2 years ago

Thank you for your help. Unfortunately the issue is still occurring. You can test it with this file I found online: https://www.w3.org/2001/06/utf-8-wrong/UTF-8-test.html

LandGrey commented 2 years ago


I fixed the bug. Please download pydictor version 2.1.5.7 and run `python pydictor.py --len 6 20 -tool printabler test.txt`.

Privacy6484847 commented 2 years ago

Thank you for your effort. Unfortunately I got the same error.

image

LandGrey commented 2 years ago

Try version 2.1.5.8.

Privacy6484847 commented 2 years ago

You did it! Amazing!! Thank you so so much!! 💯

image

Privacy6484847 commented 2 years ago

Update: pydictor now works well with non-UTF-8 characters. The issue I have now is that I hit a memory limit.

image

LandGrey commented 2 years ago

Try the latest version, 2.1.6.0.

Privacy6484847 commented 2 years ago

I tried version 2.1.6.0 and got different behavior, but the same end result. The memory usage pattern changed: instead of shooting straight up, it rose slowly and steadily for 5 minutes, then eventually gave the memory error.

During the process:

image

At the end of the process:

image

image

LandGrey commented 2 years ago

That's caused by the in-memory "remove duplicate file lines while preserving order" step. Your input files must be huge; I will consider a better way to fix it.
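For illustration, order-preserving deduplication is usually done with a `seen` set, which explains the memory growth: the set must hold every distinct line at once. This is a generic sketch, not pydictor's exact code:

```python
def unique_preserving_order(lines):
    # Classic order-preserving dedup: the `seen` set has to keep every
    # distinct line in memory at the same time, so memory use grows
    # with the number of unique lines -- prohibitive for a huge wordlist.
    seen = set()
    for line in lines:
        if line not in seen:
            seen.add(line)
            yield line
```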

LandGrey commented 2 years ago

Maybe you can try version 2.1.7.0.

Privacy6484847 commented 2 years ago

Unfortunately, the same behavior. For reference, this dictionary file is 400 GB.

image

image

LandGrey commented 2 years ago

Download the latest version, 2.1.7.1. I reduced the `pyoptions.memory_unique_max_lines_count` variable in pydictor's lib/data/data.py file to 10000000; just try again. If you see the same behavior, reduce the variable further and try again.

Privacy6484847 commented 2 years ago

OK. I'm experimenting with the variables. I would like to ask whether there is a way for pydictor to show its progress, as a percentage or in lines processed. Any indication of pydictor's progress would be very helpful for troubleshooting, and also for benchmarking the performance effect of different variable settings :)
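One cheap way to report progress on huge files is to compare the current byte offset against the total file size, avoiding a separate line-counting pass. A sketch under that assumption; the function and parameter names are invented for illustration:

```python
import os
import sys

def process_with_progress(path, handle_line, report_every=1_000_000):
    # Hypothetical sketch: estimate progress from the byte offset
    # versus the total file size, so no extra pass over a 400 GB
    # wordlist is needed just to count its lines.
    total = os.path.getsize(path)
    done = 0
    with open(path, "rb") as f:
        for raw in f:
            handle_line(raw)
            done += 1
            if done % report_every == 0:
                # f.tell() can run slightly ahead because of read-ahead
                # buffering, but it is close enough for a progress line
                pct = 100.0 * min(f.tell(), total) / max(total, 1)
                sys.stderr.write(f"\rprocessed {done:,} lines ({pct:.1f}%)")
    return done
```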

Privacy6484847 commented 2 years ago

After a lot of testing: reducing the `pyoptions.memory_unique_max_lines_count` variable lowers performance dramatically, and it only makes the error take longer to happen. I even tried with a relatively small 30 GB file and gave Windows a 60 GB paging file; pydictor still ended in a memory error. Maybe instead of relying on the paging file, pydictor should incrementally save the processed data directly to disk. :)
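The incremental-to-disk idea above can be sketched as external deduplication: hash-partition lines into temporary bucket files small enough to dedupe in memory one at a time, then concatenate the deduplicated buckets. This is an illustrative sketch, not a pydictor feature, and note that it does not preserve global line order:

```python
import hashlib
import os
import tempfile

def dedupe_external(src_path, dst_path, buckets=256):
    # Hypothetical sketch of external deduplication.  Duplicate lines
    # always hash to the same bucket file, so deduplicating each bucket
    # independently yields a globally unique result while only one
    # bucket's worth of lines is ever held in memory.
    tmp_dir = tempfile.mkdtemp()
    parts = [open(os.path.join(tmp_dir, f"{i}.tmp"), "wb") for i in range(buckets)]
    with open(src_path, "rb") as src:
        for line in src:
            parts[hashlib.md5(line).digest()[0] % buckets].write(line)
    for p in parts:
        p.close()
    with open(dst_path, "wb") as dst:
        for i in range(buckets):
            name = os.path.join(tmp_dir, f"{i}.tmp")
            with open(name, "rb") as bucket:
                # dict.fromkeys deduplicates while keeping bucket order
                dst.writelines(dict.fromkeys(bucket))
            os.remove(name)
    os.rmdir(tmp_dir)
```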

tl123987 commented 2 years ago

Could you add a feature to limit the number of lines the dictionary generates?

tl123987 commented 2 years ago

image

Author, why am I getting an error when I write my own rules?