For performance reasons, crawl data is stored in memory; this is intentional.
Do you indeed have 5500 files / directories on that server?
Original comment by lcam...@gmail.com on 21 Mar 2010 at 5:36
After 2 hours I had to stop it (Ctrl+C) because it had eaten up almost 5 GB.
Regarding the number of files:
(root@mainsrv)/home/www#find -type f | wc -l
243956
(root@mainsrv)/home/www#find -type d | wc -l
3070
(root@mainsrv)/home/www#
Original comment by fen...@gmail.com on 21 Mar 2010 at 6:41
That sounds like a lot, and it's not practical to scan it fully given the design of skipfish (at ~50,000 requests per directory with minimal.wl). Only a scanner that does not perform comprehensive brute-forcing would be able to cover this much in a reasonable timeframe.
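As a rough back-of-the-envelope check, using the ~50,000 requests/directory figure and the 3,070 directories counted above:

# rough estimate of directory brute-force volume alone
echo $((3070 * 50000))
153500000

That is on the order of 150 million requests before any of the ~244,000 individual files are even probed.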
Consider limiting the scan to interesting areas on the server (using the -I flag), excluding tarpit locations using -X, or disabling name.ext bruteforcing with -Y. You can also use -s to limit the size of file samples collected.
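For illustration only, a combined invocation might look roughly like the sketch below; the output directory, match strings, and target URL are placeholders, not values taken from this report:

skipfish -o sf-out -W minimal.wl -I /app/ -X /archive/ -Y -s 200000 http://mainsrv/

Adjust the -I and -X strings so they match the parts of your tree you actually care about and skip the ones that only inflate the crawl.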
Original comment by lcam...@gmail.com on 21 Mar 2010 at 8:28
Original issue reported on code.google.com by fen...@gmail.com on 21 Mar 2010 at 5:34