Not easily; you can monitor progress by examining the log file in debug builds,
or checking crawl status by hitting 'space'. I'm thinking of how to make this
better.
Original comment by lcam...@gmail.com
on 7 Jun 2011 at 4:38
Definitely needed.
Original comment by p...@p1sec.com
on 7 Feb 2012 at 10:02
+1
There's no reason the program couldn't dump the data as it generates it. Don't
confuse this with presentation: since the data is the only thing that varies, I
would hope the presentation code (HTML, CSS, JS) is identical across scans.
If/when that's the case, then even if the program is interrupted, some
reporting should be available.
I'm sorry if it's not my place to suggest this, but I would:
1. Refactor the code so the presentation files are always identical, with
nothing in them specific to a particular scan session
2. Dump data to files as it is collected, even if that means a gazillion little
data files; ideally it would be one or a few files with known names, so the
presentation code can find them (JS alone cannot discover how many data files
there are) - see the sketch after this list
3. Since the presentation files would then be separate from the data files, and
if the current presentation model is to stay, write the presentation files at
the very beginning of the scan session; that way they are always there, rather
than never being created because that task was left for the end
4. With that in place, even if the process is killed, whatever data has been
collected is already on disk and a report is available. This also brings
another useful feature for free: the report page can be refreshed while the
scan is running, and new data shows up immediately. That does mean the data
files shouldn't be locked against reads while they are being written to.
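Roughly, the output layout I have in mind would look like this (file names are
made up, purely to illustrate the split between the static viewer and the
incrementally written data):
  out-dir/
    index.html    <- static viewer, identical for every scan (point 1)
    viewer.js     <- static presentation code, written at scan start (point 3)
    findings.js   <- data file, appended to as results are collected (point 2)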
I'm sure you could pull this off if you wanted to.
Original comment by hlubo...@gmail.com
on 4 Mar 2012 at 7:13
The "presentation files" (i.e., the viewer for raw data) are very much static
HTML and JS, and they don't change across scans. I suggest having a look at the
report structure :-)
There are reasons why the program can't dump the data right now (chiefly, there
are important postprocessing steps that can't be carried out until the scan is
reasonably complete). This can be changed, but it isn't trivial. It's on our
radar, and we know how to do it - it's just not the top priority right now.
Original comment by lcam...@gmail.com
on 4 Mar 2012 at 7:17
Well, then it should be a priority to add some way to interrupt the scan and
have it exit gracefully. Ctrl+C doesn't work, and when the scans I attempted
passed the one-hour mark, I started to feel they had gone too far. The only way
I could interrupt one was to kill the process, but that doesn't create any
files. So in that case I wasted an hour for nothing.
It's good that the "presentation files" are static. That makes it even easier
to create them at the time the output folder is created - even though that
won't make a difference until you find the time to refactor the process to dump
data files as it goes. BTW, why is the output folder created up front, since
the report is only created at the end? All of this tells me it would be much
better if you bumped this up the priority list. This process IS about
reporting, isn't it? Nobody but you will be interested in staring at the
progress numbers: it's boring, and it tells me very little except that the
program is doing something. I'm speaking from the perspective of a user who
cares about the higher-level outcome (the report), not the bits and internals
of what creates it (again, the report).
Original comment by hlubo...@gmail.com
on 4 Mar 2012 at 7:26
Ctrl-C works in all reasonable settings; if it doesn't work for you, that's
probably a bug unrelated to skipfish (Cygwin is known for having problems with
Ctrl-C, and there are workarounds, but in the vast majority of cases it works
as-is).
You can certainly kill the process and have it write a report. Use killall -2
skipfish.
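For example (assuming a single running instance, since killall -2 sends SIGINT
to every process named skipfish):
$ ./skipfish -o out-dir http://www.example.org
Then, from another terminal, when you want to stop early:
$ killall -2 skipfish
skipfish should shut down and write its report to out-dir.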
> BTW, why is the output folder created up front, since the report is only
created at the end?
So that if it can't be created, we exit immediately instead of wasting a few
hours only to realize we can't write the report at all.
Look, we know about this feature request, know how to implement it, and will
probably do so. As noted, you can abort scans and see their output (in a couple
of ways), so this is not an immediate priority, but it's certainly good to
have. If it's a deal-breaker for you, then for the time being you may prefer to
stick with other open-source or commercial tools (and watch this bug).
Original comment by lcam...@gmail.com
on 4 Mar 2012 at 7:33
Ctrl-C has always worked for me. It's usually a scary moment when I press it
after many hours of running, but it hasn't failed yet.
I'd still like the feature, but I'm happy to wait until it can be done
properly.
Original comment by ro...@digininja.org
on 4 Mar 2012 at 9:45
As of 2.06b we have limited runtime reporting. You can run a scan like:
$ ./skipfish -o dir -vv http://www.example.org 2> runtime.log
And do:
$ tail -f runtime.log
Alternatively, if you don't like the statistics screen, you can do:
$ ./skipfish -o dir -vv -u http://www.example.org
It's not the same as interim report writing, but it should help.
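One way to combine the two, assuming 2.06b or newer:
$ ./skipfish -o dir -vv -u http://www.example.org 2> runtime.log &
$ tail -f runtime.log
That runs the scan quietly in the background while you follow the runtime log.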
Original comment by niels.he...@gmail.com
on 13 May 2012 at 10:49
Original issue reported on code.google.com by dni...@gmail.com
on 7 Jun 2011 at 1:56