Open basedygt opened 4 months ago
I think they use Selenium because it behaves like a real browser. Requests only fetches the static page, so no JavaScript is executed. With Selenium, if there is JavaScript being executed, you will actually see the alert box or iframe fire. Requests is quicker, but how would you get the dynamic part of the page? If JavaScript alters the page in any way, for example by adding data from an AJAX request like your XSS, requests won't wait for that; it only gets the raw HTML that is loaded before the JavaScript runs, potentially giving a false negative. But that's just my theory of why they use Selenium and not requests. It's also easier to get HTML tags using `find_element_by_tag_name`. With requests the return value is a string, so to get the proper tag you would either have to write your own search, which would be slow or inefficient, or use an additional library like BeautifulSoup.
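To illustrate the false-negative case described above, here is a minimal sketch (the page strings and the helper function are made up for illustration): with requests you only get the raw HTML string, so detecting a reflected payload comes down to a substring search (or a parser like BeautifulSoup), and anything JavaScript adds later is simply absent from that string.

```python
# Hypothetical sketch: reflection checks against raw HTML only.
# Content added client-side by JavaScript (e.g. via AJAX) never
# appears in the string that requests returns -- the false negative
# described above.

def payload_reflected(raw_html: str, payload: str) -> bool:
    """Return True if the payload appears verbatim in the raw HTML."""
    return payload in raw_html

payload = "<script>alert(1)</script>"

# Static reflection: the server echoes the payload in the HTML itself.
static_page = f"<html><body>file {payload} not found</body></html>"

# JS-rendered page: the server returns only a stub; the payload would be
# inserted client-side, so a plain HTTP fetch never sees it.
js_page = "<html><body><div id='app'></div></body></html>"

print(payload_reflected(static_page, payload))  # True
print(payload_reflected(js_page, payload))      # False
```

A Selenium-driven browser would execute the JavaScript first and inspect the resulting DOM, which is exactly why it catches cases this string check misses.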
Is your feature request related to a problem? Please describe. Injecting payloads into URL paths is very slow because it uses Selenium by default.
Describe the solution you'd like Adding an additional argument that lets the user inject the payload with requests or another lightweight tool should solve this issue.
Additional context
For scanning a URL that reflects the value from its path, say
https://example.com/file/payload
results in "file payload not found"
and can be exploited with https://example.com/file/<script>alert(1)</script>
Using xsstrike we can automate it with the command below:
However, it uses `selenium` to test payloads, which is slow compared to `requests`.
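A minimal sketch of the lightweight mode proposed above, assuming the tool would substitute the payload into the last path segment and fetch the page over plain HTTP instead of driving a browser. The function name and flow here are hypothetical, not XSStrike's actual implementation.

```python
# Hypothetical sketch: build a path-injection URL for a lightweight
# requests-based check (URL construction only; no network access here).
from urllib.parse import quote, urlsplit, urlunsplit

def build_path_injection_url(base_url: str, payload: str) -> str:
    """Replace the last path segment of base_url with the URL-encoded payload."""
    parts = urlsplit(base_url)
    segments = parts.path.rstrip("/").split("/")
    segments[-1] = quote(payload, safe="")  # encode <, >, (, ), / etc.
    return urlunsplit(parts._replace(path="/".join(segments)))

url = build_path_injection_url("https://example.com/file/payload",
                               "<script>alert(1)</script>")
print(url)
# https://example.com/file/%3Cscript%3Ealert%281%29%3C%2Fscript%3E

# The lightweight check would then be roughly:
#   resp = requests.get(url, timeout=10)
#   reflected = "<script>alert(1)</script>" in resp.text
# with a Selenium fallback still needed for pages that render the
# reflection via JavaScript.
```

Note the trade-off from the comment above still applies: this only inspects the raw HTML, so a browser-based pass remains necessary to avoid false negatives on JavaScript-rendered pages.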