blacklanternsecurity / bbot

The recursive internet scanner for hackers. 🧡
https://www.blacklanternsecurity.com/bbot/

Disabling certain modules (via config file) is not respected when using -f flag #548

Closed Azathothas closed 1 year ago

Azathothas commented 1 year ago

I have a custom config file where I have enabled only certain modules. Command:

bbot -t target_domains.txt -f subdomain-enum -m httpx --config $HOME/.config/bbot/bbot_subdomains_quick.yml --yes --silent

Config file:

# Working Directory
home: ~/.bbot
# Report Output
scope_report_distance: 1
# Resolve DNS (Used later)
dns_resolution: false
max_threads: 100
max_dns_threads: 100
# Useful for debugging
# http_proxy: null
# Always use a Mac OS user agent (fewer WAF triggers & blocks)
user_agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36

# Recursion / search distances
scope_search_distance: 0
scope_dns_search_distance: 0
dns_resolve_distance: 5

# Infer certain events from others, e.g. IPs from IP ranges, DNS_NAMEs from URLs, etc.
speculate: false
# Passively search event data for URLs, hostnames, emails, etc.
excavate: false
# Summarize activity at the end of a scan
aggregate: false

# HTTP timeout (for Python requests; API calls, etc.)
http_timeout: 10
# HTTP timeout (for httpx)
httpx_timeout: 5
# Custom HTTP headers (e.g. cookies, etc.)
# in the format { "Header-Key": "header_value" }
# These are attached to all in-scope HTTP requests
# Note that some modules (e.g. github) may end up sending these to out-of-scope resources
http_headers: {}
# HTTP retries (for Python requests; API calls, etc.)
http_retries: 1
# HTTP retries (for httpx)
httpx_retries: 1
# Enable/disable debug messages for web requests/responses
http_debug: false
# Maximum number of HTTP redirects to follow
http_max_redirects: 5
# Set the maximum number of HTTP links that can be followed in a row (0 == no spidering allowed)
web_spider_distance: 0
# Set the maximum directory depth for the web spider
web_spider_depth: 1
# Set the maximum number of links that can be followed per page
web_spider_links_per_page: 25
# DNS query timeout
dns_timeout: 5
# How many times to retry DNS queries
dns_retries: 1
# Disable BBOT's smart DNS wildcard handling for select domains
dns_wildcard_ignore: []
# How many sanity checks to make when verifying wildcard DNS
# Increase this value if BBOT's wildcard detection isn't working
dns_wildcard_tests: 5
# Skip DNS requests for a certain domain and rdtype after encountering this many timeouts or SERVFAILs
# This helps prevent faulty DNS servers from hanging up the scan
dns_abort_threshold: 10
# Don't show PTR records containing IP addresses
dns_filter_ptrs: true
# Enable/disable debug messages for dns queries
dns_debug: false
# Whether to verify SSL certificates
ssl_verify: false
# How many scan results to keep before cleaning up the older ones
keep_scans: 100
# Completely ignore URLs with these extensions
url_extension_blacklist:
    # images
    - png
    - jpg
    - bmp
    - ico
    - jpeg
    - gif
    - svg
    # web/fonts
    - css
    - woff
    - woff2
    - ttf
    # audio
    - mp3
    - m4a
    - wav
    - flac
    # video
    - mp4
    - mkv
    - avi
    - wmv
    - mov
    - flv
    - webm
# Distribute URLs with these extensions only to httpx (these are omitted from output)
url_extension_httpx_only:
    - js
# Don't output these types of events (they are still distributed to modules)
omit_event_types:
    - HTTP_RESPONSE
    - URL_UNVERIFIED
    # - DNS_NAME_UNRESOLVED
    # - IP_ADDRESS

# Custom interactsh server settings
interactsh_server: null
interactsh_token: null
interactsh_disable: true

# Enabled Modules
modules:
   anubisdb: {}
   bevigil:
     urls: false
   binaryedge:
     max_records: 1000
   builtwith:
     redirects: true
   censys:
     max_records: 1000
   certspotter: {}
   crobat: {}
   crt: {}
   dnscommonsrv: {}
   dnsdumpster: {}
   fullhunt: {}
   github: {}
   hackertarget: {}
   httpx:
     threads: 100
     in_scope_only: true
     max_response_size: 5242880
   hunterio: {}
   ipneighbor:
     num_bits: 4
   ipstack: {}
   leakix: {}
   otx: {}
   passivetotal: {}
   rapiddns: {}
   riddler: {}
   securitytrails: {}
   shodan_dns: {}
   sublist3r: {}
   threatminer: {}
   urlscan:
     urls: false
   viewdns: {}
   virustotal: {}
   wayback:
     urls: false
     garbage_threshold: 10
   zoomeye:
     max_pages: 20
     include_related: false

output_modules:
   asset_inventory:
     output_file: '~/.bbot/BBOT_Inventory'
     use_previous: true
     summary_netmask: 16
   csv:
     output_file: '~/.bbot/BBOT_Subdomains.csv'
   human:
     output_file: '~/.bbot/BBOT_Subdomains.txt'
     console: true
   json:
     output_file: '~/.bbot/BBOT_Subdomains.json'
     console: false

As you can see:

  1. I have disabled DNS resolution (dns_resolution: false)
  2. I have not listed massdns anywhere under modules

Expected Behaviour: massdns should not run, since it isn't listed under modules in my config.

Actual Behaviour: it runs massdns anyway (see the attached screenshot).

Am I writing the config file the wrong way? If so, what is the correct setting to disable/enable certain modules?

TheTechromancer commented 1 year ago

Thanks for your question. The reason you're seeing this behavior is that modules can only be enabled or disabled via the command line; the config is mainly for changing their individual options. (Internal modules, e.g. excavate and speculate, are the exception: they are essential to most scans and are enabled by default, but in edge cases they can be disabled via the config.)
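
To illustrate the split (a sketch reusing values already shown in this thread): the config can toggle internal modules and tune per-module options, but listing or omitting a scan module there has no effect on whether it runs.

# config file: internal modules and per-module options only
speculate: false
excavate: false
modules:
  httpx:
    threads: 100   # tunes httpx; does not by itself enable or disable it

# command line: selects which scan modules actually run
bbot -t target_domains.txt -m httpx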

Flags are groups of modules. With -f subdomain-enum, you're enabling every module with the subdomain-enum flag. (You can list these with bbot -l -f subdomain-enum).

To disable individual modules, you can specify -em massdns .... You can also exclude by flag, e.g. -ef slow.
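
Applied to the command from the original post, that would look like the following (a sketch; only -em massdns is added, everything else is unchanged):

bbot -t target_domains.txt -f subdomain-enum -m httpx -em massdns --config $HOME/.config/bbot/bbot_subdomains_quick.yml --yes --silent

This keeps the subdomain-enum flag group but excludes massdns from it.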