intruxxer / zaproxy

Automatically exported from code.google.com/p/zaproxy

Problem Maintaining Sessions #529

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Scanning the LampSecurity CTF7
2. Login with account (session tracking on)
3. Set login page as session Login page with credentials
4. Right-Click->Attack->Spider Session -> Nothing Happens
5. Right-Click->Attack->Spider Site -> Session Handling Issue

What is the expected output? What do you see instead?
The spidering works more or less, but multiple pages log out the user, and the
spider doesn't log in again once logged out, so most pages return a 'please
login' message.

Tried :
- Adding a Logged Out regex
- Enabling automatic re-authentication
- Running Attack->Spider Session after Spider Site: this time it works and
detects more pages, but the session logout issue remains.

Being able to set a regex for logout pages, instead of just one logout page,
might help?
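For what it's worth, a 'Logged Out' indicator of the kind tried above is just a regular expression matched against responses, so a single pattern can already cover several logout cases. A small sketch (the sample body text is made up):

```python
import re

# Hypothetical indicator covering several logged-out responses at once;
# ZAP matches such patterns against the response body.
logged_out = re.compile(r"please login|you have been logged out", re.IGNORECASE)

assert logged_out.search("Error: Please Login to continue")
assert logged_out.search("<p>You have been logged out.</p>")
assert not logged_out.search("Welcome back, admin")
```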

For some reason the spider also tries sending POST requests to some pages with 
the POST data :
username=ZAP&password=ZAP

What version of the product are you using? On what operating system?
Version 2.0 on Backtrack 5R3

Please provide any additional information below.

Original issue reported on code.google.com by thec4kei...@gmail.com on 20 Feb 2013 at 9:53

GoogleCodeExporter commented 9 years ago
Just checked the POST request issue; it turns out this is "normal", since the
login form is present on each page once logged out.

Tried adding the login details to Options->Authentication; the Spider still
sends the credentials ZAP/ZAP.

Original comment by thec4kei...@gmail.com on 20 Feb 2013 at 10:41

GoogleCodeExporter commented 9 years ago
OK, there are various things going on here, some of which are bugs.
Re-authentication currently only works for active scanning: Issue 490, which
is fixed in the latest weekly release.
The 'Logged out' page is just there to allow ZAP to switch sessions for you.
To prevent the Spider and Active Scanner from hitting logout pages, add them
to the exclusions - in the UI, right-click the relevant URLs and choose
'Exclude from->Spider' and 'Exclude from->Scanner' - you can have any number
of these.
You can also set them via the API.
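The API route mentioned above might look something like this - a minimal sketch assuming a local ZAP instance at 127.0.0.1:8080 and JSON API excludeFromScan actions for the spider and active scanner (the base URL and the regex are illustrative):

```python
from urllib.parse import quote

# Hypothetical local ZAP instance; adjust host/port to taste.
ZAP = "http://127.0.0.1:8080"

def exclude_url(component: str, regex: str) -> str:
    """Build the JSON API call that excludes URLs matching `regex`
    from the given component ('spider' or 'ascan')."""
    return f"{ZAP}/JSON/{component}/action/excludeFromScan/?regex={quote(regex, safe='')}"

# One regex can cover every logout page, mirroring the UI's
# 'Exclude from->Spider' / 'Exclude from->Scanner' entries.
spider_call = exclude_url("spider", r".*logout.*")
ascan_call = exclude_url("ascan", r".*logout.*")
```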
The Spider will just supply 'ZAP' as a value - this is normal behaviour and
not an attempt to log in.

Does that help?

Original comment by psii...@gmail.com on 20 Feb 2013 at 11:24

GoogleCodeExporter commented 9 years ago
Thank you that helped me understand the way ZAP works a lot better, I tried 
spidering again after excluding all logout pages from the spider, the Session 
problem remains.

The spider doesn't seem to be using the current session. I have checked the
responses and none include a Set-Cookie header, yet the session cookie sent in
the requests is different from the one that was used before I ran the Spider.

Original comment by thec4kei...@gmail.com on 20 Feb 2013 at 12:45

GoogleCodeExporter commented 9 years ago
Are you doing this via the UI?
If so, before spidering select the 'HTTP Sessions' tab.
There will (hopefully) be one session - right-click it and set it as the
default session.
The spider should do this by default - I think there's a bug for this; I'll
check.
The API has been updated so you can set the session as well, but that might be 
post 2.0.0.
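For reference, a sketch of what that API call might look like, assuming an httpSessions setActiveSession action on a local ZAP instance (the base URL, site, and session name here are illustrative):

```python
from urllib.parse import urlencode

ZAP = "http://127.0.0.1:8080"  # hypothetical local ZAP instance

def set_active_session(site: str, session: str) -> str:
    """Build the JSON API call that marks `session` as the active
    (default) HTTP session for `site`."""
    return f"{ZAP}/JSON/httpSessions/action/setActiveSession/?{urlencode({'site': site, 'session': session})}"

# E.g. make 'Session 0' the session the spider should reuse.
call = set_active_session("www.example.com", "Session 0")
```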

Original comment by psii...@gmail.com on 20 Feb 2013 at 12:53

GoogleCodeExporter commented 9 years ago
Thank you!

This was the problem, I had loads of different sessions in there. Cleaning this
up and setting one single session as default did the trick!

So I finally did the scan; there are two issues now:
A - Attack->Scan Single Page
The scan only displays one line, with the URL and a "/" at the end of it... (in
my case resulting in a 404)

B - Attack->Scan Site
The scan runs for a while, then when I look at the list, 90% of the scanned
pages are in fact ONE single page; most of the other pages in the website are
not even in the list...

Original comment by thec4kei...@gmail.com on 20 Feb 2013 at 2:19

GoogleCodeExporter commented 9 years ago
Getting there slowly :)

When scanning a single page, are you selecting a node or a leaf in the tree?
And if it's a node (i.e. it has children), is there also a leaf with the same
name?
This can happen if, for example, there are pages like:
http://www.example.com/myapp
http://www.example.com/myapp/home
There will be 2 nodes in the Sites tree under http://www.example.com/:
"GET:myapp" and "myapp" (which will have children).
Scanning the "myapp" node as a single page will probably fail as you've
described, as it's really a dummy node.
So make sure you're scanning the page you want to scan.
Note that you may also have a POST:myapp page too, which you might want to
scan instead?
It might also be worth trying to scan the subtree as well.
Let's see if those suggestions help; if not, we can look at the problems with
scanning the site.
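The node-versus-leaf layout described above can be sketched as a tiny tree (a sketch only, using the example URLs from the comment; None marks a concrete request, a dict marks a branch node):

```python
# Sketch of the Sites tree shape described above: under the site there is
# both a leaf for the page itself and a same-named branch holding children.
sites_tree = {
    "http://www.example.com": {
        "GET:myapp": None,   # the real page - scannable as a single page
        "myapp": {           # dummy branch node - scanning it alone fails
            "GET:home": None,
        },
    },
}

site = sites_tree["http://www.example.com"]
assert site["GET:myapp"] is None      # leaf: a concrete request
assert "GET:home" in site["myapp"]    # branch: only groups its children
```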

Original comment by psii...@gmail.com on 20 Feb 2013 at 2:32

GoogleCodeExporter commented 9 years ago
Unfortunately these are not nodes. Here is an example :
GET:news
GET:news&id=1

In the scanner, these result in:
http://www.example.com/news/
and 
http://www.example.com/news&id=1/

Original comment by thec4kei...@gmail.com on 20 Feb 2013 at 2:45

GoogleCodeExporter commented 9 years ago
Since this issue is entirely different from the original subject, I will
submit a separate issue thread.
I have tried reproducing the issue with a new 'clean' scan and the results were 
the same.

Original comment by thec4kei...@gmail.com on 21 Feb 2013 at 3:58

GoogleCodeExporter commented 9 years ago
As per previous comments.

Original comment by THC...@gmail.com on 8 Apr 2013 at 3:59

GoogleCodeExporter commented 9 years ago
how to scan login based pages?

Original comment by s.sindhu...@gmail.com on 5 Jun 2014 at 4:47

GoogleCodeExporter commented 9 years ago
Please ask questions like that on the user group: 
http://groups.google.com/group/zaproxy-users rather than on closed issues.

Original comment by psii...@gmail.com on 7 Jun 2014 at 12:56