Starting from revision 57e5904ba4, spider runs multiple low-level scanners
simultaneously but applies database updates in single-threaded mode, so the
overall scanning speed can be increased without overloading the database with
too many hosts being updated at once. However, this also makes subprocess
termination detection inaccurate: while the scanners are running, spider
checks them every 5 seconds (a configurable value), but it cannot check them
during database updates. This is the first problem.
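A minimal sketch of that main loop; the function names, the stub bodies, and
the placeholder scanner command are hypothetical, not taken from the spider
sources:

```python
import subprocess
import time

CHECK_INTERVAL = 5  # seconds; the configurable value mentioned above

def collect_results(proc):
    """Stub: read the scanner's temporary output file (hypothetical)."""
    return proc.pid

def apply_database_update(result):
    """Stub: a potentially long, blocking database update (hypothetical)."""
    time.sleep(1)

def main_loop(scanners):
    """Single-threaded loop: poll scanners, then apply pending updates."""
    pending = []
    while scanners or pending:
        for proc in list(scanners):
            if proc.poll() is not None:      # scanner has exited
                scanners.remove(proc)
                pending.append(collect_results(proc))
        # Problem 1: while these updates run, no scanner is polled, so a
        # scanner that terminates here is only noticed much later.
        while pending:
            apply_database_update(pending.pop())
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    # "sleep" stands in for a real low-level scanner.
    main_loop([subprocess.Popen(["sleep", "2"]) for _ in range(3)])
```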
Another problem is the inability to detect scanners that produce too many
output lines or hang, and to terminate such subprocesses, because all the
output is redirected to temporary files.
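For illustration, a sketch of why temporary-file redirection hides the output
from the parent ("ping" is a placeholder for a real scanner command):

```python
import subprocess
import tempfile

# The scanner's output goes straight to a temporary file.
out = tempfile.TemporaryFile()
proc = subprocess.Popen(["ping", "-c", "5", "localhost"], stdout=out)

# While the scanner runs, the parent only holds a file handle: it cannot
# count output lines, so it cannot tell a hung or runaway scanner apart
# from a normal one.  The output becomes inspectable only after exit:
proc.wait()
out.seek(0)
print(len(out.readlines()), "lines of output")
```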
Possible solutions are:
* start and monitor the subprocesses from separate Python threads (how does
this interact with the Python global interpreter lock?), or use a single
monitoring thread for all subprocesses
* use a subprocess wrapper: an additional program that monitors a scanner and
does the initial processing of its output data
* same as the previous, but with a single monitoring process for all
subprocesses
Due to some problems with multithreading in Python, the first solution is
questionable. The third probably requires a complicated interface between the
starter and updater processes. The second is the simplest, but it increases
the total number of processes (so the subprocess wrapper should be as small
as possible). A sketch of the thread-based option follows below.
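On the GIL concern: CPython releases the global interpreter lock during
blocking I/O, so watcher threads that mostly wait on subprocess output do not
contend for it much. A minimal sketch of the thread-based option, with one
watcher thread plus a kill timer per scanner; the limit, timeout, and function
names are assumptions, not spider code:

```python
import subprocess
import threading

MAX_LINES = 10000  # assumed output-line limit, not a spider setting
TIMEOUT = 300      # assumed per-scanner timeout in seconds

def watch(proc, timer):
    """Kill the scanner if it prints too many lines; the timer kills it
    if it hangs and prints nothing at all."""
    count = 0
    for _ in proc.stdout:            # blocking read releases the GIL
        count += 1
        if count > MAX_LINES:
            proc.kill()
            break
    timer.cancel()                   # scanner finished or was killed

def start_scanner(argv):
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE)
    timer = threading.Timer(TIMEOUT, proc.kill)
    timer.start()
    threading.Thread(target=watch, args=(proc, timer), daemon=True).start()
    return proc
```

The watcher threads would still hand their results to the single-threaded
updater, e.g. through a queue.Queue, so the database stays protected from
concurrent updates.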
Original issue reported on code.google.com by radist...@gmail.com on 23 Oct 2010 at 1:19