OK, I did that. It ran for about a day and a bit, then Firefox (on Linux) popped up a message about a script not responding on the page. I refreshed and it just hung. I restarted Firefox and tried to log in. After the initial page I clicked on my previous run to show me the scan (still running); it paused for about a minute and then said "Sorry, something has gone wrong". I restarted Firefox again and there are errors on the scan page now. Attached.
Also, quite a few of the errors found in the scan results seem duplicated. It's still running so I can't get a report yet; if you know how to stop the background scan without destroying the results, let me know and I'll send you that report as well. The errors file is quite large (3MB). errors.zip
I notice there are about 10K files in /tmp called Arachni_Support_Database_Queue_2663.
They are constantly being created and deleted, but there are about 10K of them at any one time.
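For reference, this is roughly how I'm counting them (a quick sketch; the filename prefix is from my run and the numeric suffix will differ per process):

ls /tmp | grep -c 'Arachni_Support_Database_Queue'

# Or watch the count change over time, sampling every 5 seconds
watch -n 5 "ls /tmp | grep -c 'Arachni_Support_Database_Queue'"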
The website is tiny. It has "discovered" about 100 pages and I have not provided any login creds. It seems to be taking an extraordinary amount of time to run and still hasn't finished :-(
Mike
I updated the nightlies with a longer time-out period so these errors should now go away.
The files under /tmp/ are data being off-loaded to disk, probably browser jobs and page snapshots.
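If you want to see how much disk they're actually using, something along these lines should work (the glob assumes the filename prefix from your run):

du -ch /tmp/Arachni_Support_Database_Queue_* 2>/dev/null | tail -n 1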
About the site, any chance I can have a look at it myself? You can send the details via e-mail if you prefer.
Btw, let's stick with the CLI instead of the WebUI while we work to resolve this issue; it will provide more feedback during the scan.
Thanks
OK, I can do the CLI, no problem. The site is behind a VPN, so you won't be able to access it right now... Once it's live, though, I'd be happy to let you replicate, and I'm also very willing to assist in debugging now. Out of interest, for such a "small" site, roughly how long should I expect Arachni to take to complete? It's not obvious what is taking so long, or why so many requests are being sent for so few pages (700K so far...).
I'll kill the processes, update the nightlies and run the CLI instead.
BTW, are there any specific command-line args you want me to use to assist? Right now I am using:
arachni --output-verbose --report-save-path=`pwd`/xavier.afr http://[my url] > arachni.log 2>&1
Even a small site can generate a lot of workload via dynamic content. For example, if there's a calendar-like system, the scan will take forever unless you configure the system to limit the scanning of redundant pages. Or the dynamic content can be client-side, like JavaScript creating a large number of DOM states that all need to be checked.
This article can help you optimize your configuration.
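For example, something along these lines (a sketch with illustrative values; double-check the exact option names against arachni --help for your build):

arachni http://[my url] \
    --output-verbose \
    --report-save-path=`pwd`/xavier.afr \
    --scope-auto-redundant=10 \
    --scope-page-limit=500 \
    --scope-dom-depth-limit=5 \
    > arachni.log 2>&1

Roughly: --scope-auto-redundant caps how many times pages with the same path but different query parameters are followed, --scope-page-limit caps the total number of pages crawled, and --scope-dom-depth-limit bounds how deep the browser follows JavaScript-triggered DOM states.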
About CLI args, these are OK for now; let me know if you come across any errors and we'll go from there.
OK, it's hung again with all jobs timing out. The log is huge, 61MB compressed. I also have an Arachni error output log file (much smaller). I'd prefer not to post them here; how can I get them to you? I can create a Dropbox link, but I'd prefer a non-forum email to send it to, as I don't know what is inside these logs...
You can send the error log at tasos.laskos@gmail.com
I think that the errors in the log you sent are fixed in the latest nightlies. Can you give them a try please?
OK. It's been running for a day and for some reason the Ruby process's memory exceeded 3GB, so I had to kill it and increase the memory on the VM. Should it get that large? That sorta surprised me; a memory leak?
Also, I added an auto-redundant flag to the parameters, but I am not quite sure what would be looping. It's not clear to me why it's taking so long; I have not provided any login creds for Arachni, so the pages it can crawl are extremely limited. The verbose output file... can that tell me what is looping or creating such a large amount of work? There is no picture gallery or calendar available, but there is a basket for adding items to purchase.
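In case it helps, this is the rough sketch I'm using to sample the process's memory over time (assumes a single matching process; the pgrep pattern may need adjusting for your install):

# Log resident memory (KB) and elapsed time once a minute while the scan runs
while pgrep -f 'bin/arachni' > /dev/null; do
    ps -o rss=,etime= -p "$(pgrep -f 'bin/arachni' | head -n 1)" >> arachni-mem.log
    sleep 60
done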
3GB is a lot. I don't know if it's a leak or just too much data; I'd need to run a few scans myself to determine that, and to see why the scan is taking so long.
Closing this since the original issue has been resolved. If anything else comes up please open a new issue.
The following errors popped up.