fyoorer / ShadowClone

Unleash the power of cloud
Apache License 2.0

Runtime Error with ShadowClone and Nuclei on Large Subdomain List #64

Closed. MustafaSky closed this issue 4 days ago.

MustafaSky commented 1 week ago

Description: I encountered an issue while running ShadowClone with a large list of subdomains and a Nuclei template. The list contains approximately 2.6 million subdomains, and the Nuclei template sends 4 requests per URL. The commands used and the errors encountered are detailed below.

Commands Executed:

  1. Initial Command:

    time python shadowclone.py -i activedomains.out -s 300 --no-split /root/ShadowClone/x.yaml -o activedomains-res.out -c "nuclei -duc -l {INPUT} -t {NOSPLIT} -irt 0m20s -timeout 5"
    • Error: runtime function exceeded maximum time of 295.
  2. Updated Runtime Timeout:

    • Set runtime_timeout to 900, but the problem persisted.
  3. Split File Approach (a sketch of how the chunks can be generated follows this list):

    time for i in $(ls split-targets-shadowclone/); do 
       python shadowclone.py -i split-targets-shadowclone/$i -s 300 --no-split /root/ShadowClone/x.yaml -o outputslast/max-${i}.out -c "nuclei -duc -l {INPUT} -t {NOSPLIT} -no-httpx -stream -irt 0m20s -timeout 5";
    done
    • Error: could not execute the runtime (this sometimes occurs near the final instances, e.g., 299/300).
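For reference, this is one way the split-targets-shadowclone/ directory used in step 3 could be produced; it is only a sketch, and the 10,000-lines-per-chunk value is an illustrative assumption rather than something stated in the issue. GNU split names the pieces chunk-aa, chunk-ab, and so on:

    # Create the chunk directory consumed by the loop in step 3.
    # 10000 lines per chunk is an illustrative value, not taken from the issue.
    mkdir -p split-targets-shadowclone
    split -l 10000 activedomains.out split-targets-shadowclone/chunk-

    # Sanity check: how many chunks were created and which one is largest
    ls split-targets-shadowclone/ | wc -l
    ls -lhS split-targets-shadowclone/ | head -n 2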

Observations:

Request for Help: I am seeking guidance on whether there is an error in my approach, or whether there are optimization steps I can take to run this process without upgrading memory and incurring high costs.

Environment:

Note: The issue occurs when the list size is large, and splitting the list into smaller chunks still results in the runtime error near the final instances.

(Two screenshots attached.)

fyoorer commented 6 days ago

This seems to be a limitation of your internet speed. Uploading thousands of chunks takes a lot of time when the input file is this large. I was able to run a file with 2M+ lines and the same nuclei template within 30 minutes with the default 512 MB memory and 300 s timeout configuration.

As a workaround, you can continue using the split-file approach, but make sure your chunks are under 4 MB.
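A minimal sketch of that workaround, assuming GNU coreutils and findutils are available: split's -C option packs whole lines into output files of at most the given size, so no subdomain is cut in half, and the resulting chunks can be fed to the same loop from the original report. The paths follow the ones used earlier in this issue:

    # Split the full list into whole-line chunks of at most 4 MB each
    mkdir -p split-targets-shadowclone
    split -C 4M activedomains.out split-targets-shadowclone/chunk-

    # Confirm that no chunk exceeds the 4 MB limit before launching ShadowClone
    find split-targets-shadowclone/ -type f -size +4M

Keeping each chunk under 4 MB keeps the per-invocation upload small, which is the bottleneck described above.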