OWASP / D4N155

OWASP D4N155 - Intelligent and dynamic wordlist using OSINT
https://owasp.org/www-project-d4n155/
GNU General Public License v3.0

HTTPS Support #15

Closed. ghost closed this issue 4 years ago

ghost commented 4 years ago

I'm trying to scan sites with HTTPS and it gives the following output:

1) Make wordlist tradicional                                                                                       
2) Make wordlist aggressive
D4N155%#~> 2
[ ✔ ] Gecko file exists
Target is: https://twitter.com/sm4rtk1dz                            
Time interval in seconds (Default: -1): 
 Beginning attack, with Google indexations
Finalized search to httpstwitter.comsm4rtk1dz, database
has been saved in reports/db/httpstwitter.comsm4rtk1dz.txt
Reading urls content 0-0

Traceback (most recent call last):
  File "modules/read.py", line 22, in <module>
    print(aggressive_read(target))
  File "modules/read.py", line 10, in aggressive_read
    driver.get(f'http://{url}')
  File "/home/kali/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 333, in get
    self.execute(Command.GET, {'url': url})
  File "/home/kali/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "/home/kali/.local/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.InvalidArgumentException: Message: Malformed URL: http:// is not a valid URL.

:.........................................[ ✔ ]
⣻ Make operations 
 Wordlist has been saved in
→ reports/wordlist/httpstwitter.comsm4rtk1dz.wordlist.txt                                                          
[ ✔ ] The file has been saved in
        → report-httpstwitter.html 
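
From the traceback, the crash comes from modules/read.py line 10, where aggressive_read calls driver.get(f'http://{url}') unconditionally; by that point the URL seems to be empty or stripped of its scheme and separators (the log shows httpstwitter.comsm4rtk1dz), so Selenium rejects the result as malformed. A minimal sketch of a guard that keeps the original scheme (the normalize_url helper is hypothetical, not D4N155 code):

from urllib.parse import urlparse

def normalize_url(url):
    # Hypothetical helper: only add a scheme when one is missing,
    # so https:// targets are not rewritten into a broken http:// URL.
    if not url:
        raise ValueError("empty target URL")
    if urlparse(url).scheme in ("http", "https"):
        return url
    return f"http://{url}"

# In aggressive_read, driver.get(f'http://{url}') would then become:
# driver.get(normalize_url(url))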

Does anyone know how I can solve this? Or is it actually intended to scan social media?

Jul10l1r4 commented 4 years ago

Hi, you need to run:

printf "https://twitter.com/sm4rtk1dz" > some-file.txt
bash main -ba some-file.txt

-b: based on FILE, -a: aggressive

Jul10l1r4 commented 4 years ago

Does anyone know how I can solve this? Or is it actually intended to scan social media?

Good issue. On Twitter, Selenium gets limited data... it needs to scroll the page to load more posts :roll_eyes:
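
A rough sketch of what that scroll step could look like with Selenium (the function and its parameters are illustrative, not part of D4N155):

import time
from selenium import webdriver

def scroll_to_load(driver, rounds=5, pause=2.0):
    # Scroll to the bottom a few times so lazily-loaded posts appear.
    last_height = driver.execute_script("return document.body.scrollHeight")
    for _ in range(rounds):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:  # nothing new loaded, stop early
            break
        last_height = new_height

# usage sketch: call scroll_to_load(driver) after driver.get(target),
# then read the page source as usual.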

ghost commented 4 years ago

I ran what you said, and it made a small wordlist by appending some numbers to some-file.txt; I saw it makes a wordlist out of a wordlist. So doing the request, combined with some magic from the BeautifulSoup lib, could make it retrieve tons of passwords.
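
A rough sketch of that idea with requests and BeautifulSoup (not D4N155's own code; just pulling tokens from a page to seed a wordlist):

import re
import requests
from bs4 import BeautifulSoup

def words_from_page(url):
    # Fetch a page and collect candidate tokens for a wordlist.
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    text = BeautifulSoup(resp.text, "html.parser").get_text(separator=" ")
    # keep alphanumeric tokens of a plausible length
    return sorted(set(re.findall(r"[A-Za-z0-9_]{4,20}", text)))

if __name__ == "__main__":
    for word in words_from_page("https://example.com"):
        print(word)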

Jul10l1r4 commented 4 years ago

Sorryyy, test -t and not -b. -t is for URL lists in a file.
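
So, assuming the flags combine the same way as -ba above, the corrected run would presumably be:

printf "https://twitter.com/sm4rtk1dz" > some-file.txt
bash main -ta some-file.txt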