six2dez / reconftw

reconFTW is a tool designed to perform automated recon on a target domain by running the best set of tools to perform scanning and find vulnerabilities
MIT License

A lot of issues with Subdomain enumeration and widening attack area #339

Closed: conorxx1 closed this issue 3 years ago

conorxx1 commented 3 years ago

I guess the subdomain enumeration part is a bit lacking and needs some enhancements:

  1. ASN to CIDR to IP enumeration.
  2. Feed the enumerated IPs into a port scan and maybe into the searchsploit DB.
  3. JS recon needs work; it creates only a single file, url_extract.txt.
  4. portscan_passive.txt is always empty.
  5. Sometimes half of the folders/files are never created. I have noticed that even when nuclei doesn't find anything, the files are sometimes created and sometimes not. The same goes for the vulnerability module: sometimes files are created even when no XSS/CSRF/LFI/CRLF is found (intended behaviour), but sometimes they are not created at all.
  6. Subdomain enumeration has some issues. For example, for the target lazada.com I got only 1 subdomain in total (retried twice); running subfinder and amass manually afterwards returned 600+ subdomains.
  7. It would be nice if fuzzing results could be stored in a single file separated by subdomain. For 300 subdomains it creates 300 fuzzing result files, which is quite messy to look at on a VPS (a merge sketch follows this list).
  8. Telegram notifications are not working; I tried Slack and the start and end notifications work there.
  9. webs/urls_by_ext.txt is never created.
  10. Use FavFreak for the favicon hash.
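For point 7, a rough sketch of the kind of merge I mean, with one header per subdomain (the fuzzing/ directory name and .txt extension are assumptions on my part, not reconftw's actual layout):

for f in fuzzing/*.txt; do
  printf '\n===== %s =====\n' "$(basename "$f" .txt)"   # subdomain header
  cat "$f"                                              # that subdomain's fuzzing results
done > fuzzing_full.txt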

Note: all my observations are from runs inside screen; I sometimes lose connection to the VPS, so I intend to run it in screen.

conorxx1 commented 3 years ago

Also, please clarify the config file settings. For Slack notifications I have set the webhook URL and I only get the start and end notifications. Please clarify which username should be used (the bot username or an individual username), whether the channel name needs to be preceded by #, and which token needs to be set for module-based results.
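A minimal sketch of the notification setup, assuming reconftw delegates to projectdiscovery's notify tool and that its provider-config.yaml field names apply (all webhook/token values below are placeholders):

mkdir -p "$HOME/.config/notify"
cat > "$HOME/.config/notify/provider-config.yaml" <<'EOF'
slack:
  - id: "slack"
    slack_channel: "recon"              # channel name (test with and without the leading '#')
    slack_username: "reconftw-bot"      # the bot's display name, not your personal username
    slack_webhook_url: "https://hooks.slack.com/services/XXX"
telegram:
  - id: "tel"
    telegram_api_key: "123456:ABC-DEF"  # bot token from @BotFather
    telegram_chat_id: "123456789"
EOF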

conorxx1 commented 3 years ago

[screenshot] Again, the root domain is returned in the subdomains.txt file. I don't know what the issue is; since the last update it has been causing a lot of problems with subdomain enumeration. I will dig deeper, try to investigate the issue, and let you know if I find something. Thanks

six2dez commented 3 years ago

Hi!

Thanks for your suggestions, let me answer your requests:

  1. I will not add a feature that takes an ASN as the target, because I think it is really easy to go out of scope; it is also a really simple operation to get the CIDRs for an ASN manually before running reconftw (a sketch follows this list).
  2. I like this one, I will try to add this feature ASAP.
  3. It depends on your target; for me it's working now, but it needs some enhancements, as said in #330.
  4. Again, it depends on your target; you also need to set the SHODAN API key env var (or set it in the config file).
  5. The usual behavior is to remove empty files and folders at the end of the scan; if you stop the scan before it ends, every folder and file will be there.
  6. Subdomain enumeration returns only ACTIVE subdomains. Try scanning vulnweb.com: every tool will return thousands of subdomains, but only 4 or 5 will really be active. If you still want the passive subdomains that were extracted, you can check them in the .tmp folder.
  7. For me it is useful and easy to store fuzzing results one file per subdomain, but I can create one more file, like fuzzing_full.txt, with all the results.
  8. Check the wiki, I'm using Telegram notifications right now.
  9. You're right, fix incoming.
  10. Already using favUp.
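A minimal sketch of that manual ASN-to-CIDR-to-IP step from point 1, assuming whois and projectdiscovery's mapcidr are available (AS13335 is just an example ASN):

# Pull the route objects registered for the ASN from RADB and keep the CIDRs:
whois -h whois.radb.net -- '-i origin AS13335' | awk '/^route:/ {print $2}' | sort -u > cidrs.txt
# Expand the CIDRs into individual IPs, ready to feed into a port scan:
mapcidr -silent < cidrs.txt > ips.txt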

Any other suggestion is welcome :)

conorxx1 commented 3 years ago

Any workaround for JS recon until you push some updates?

conorxx1 commented 3 years ago

@six2dez Hey, if you need some reference for the Nmap to searchsploit stuff, you can check here: https://github.com/Gr1mmie/autoenum
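For reference, the core of that flow is just a service-version scan whose XML output is fed to searchsploit. A minimal sketch, assuming nmap and exploitdb's searchsploit are installed (ips.txt is a placeholder target list):

nmap -sV -oX scan.xml -iL ips.txt   # service/version detection, XML output
searchsploit --nmap scan.xml        # look up known exploits for the detected services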

six2dez commented 3 years ago

Any workaround for JS recon until you push some updates?

Check the file called url_extract_tmp.txt in the .tmp folder; the base JS files are extracted from there. You can check it easily by running:

cat .tmp/url_extract_tmp.txt | grep "yourtargetdomain.com" | grep -Ei "\.(js)"

If this query doesn't return anything, the whole JS scan process will not be performed.
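If you want to go one step further, a rough sketch that dedupes the JS URLs and keeps only the ones that respond (assumes projectdiscovery's httpx is installed; the domain is a placeholder):

grep "yourtargetdomain.com" .tmp/url_extract_tmp.txt | grep -Ei "\.js" | sort -u | httpx -silent -mc 200 > js_live.txt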

conorxx1 commented 3 years ago

Thanks, it's grepping the JS files; I will need to tweak it into the script. Also, the CRLF/LFI checks take around 5-6 hours and return no results, while on the other hand some checks like the 403 bypass finish in 0 seconds.

six2dez commented 3 years ago

Pushed changes to the dev branch according to this issue.