babariviere opened 2 months ago
After investigation, it's working with the CLI but not with our implementation using the library. Closing for now, and sorry for the noise.
OK, it also happens with the CLI when this is the only template included.
So if I do this: `nuclei -target https://geoserver.example.com -t http/cves/2023/CVE-2023-25157.yaml`
It will be stuck forever.
But with this: `nuclei -target https://geoserver.example.com -it http/cves/2023/CVE-2023-25157.yaml`
The default templates will mark the host as failed and skip this template.
I can't reproduce it on my side, so I'm leaving this for the PD devs.
EDIT: never mind, I changed the target and reproduced it.
So I found what the issue is. When running with debug logs, there are a ton of HTTP queries.
Querying the first URL from the template, we get a lot of matches for the `<FeatureType><Name>(.*?)<\/Name><Title>` regex.
So my guess is that this "matrix-like" feature is making the scan take forever. Since there is no task timeout, only an HTTP timeout, it will never stop until every `{{name}}` and `{{column}}` combination is fetched.
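As a rough illustration of the blow-up (a sketch with made-up counts; `featureNames` and `requestCount` are illustrative helpers, not nuclei internals): one GetCapabilities response with N feature names, each with M columns, fans out to 1 + N + N*M requests.

```go
package main

import "regexp"

// nameRe mirrors the template's extractor for feature names
// in the GetCapabilities response body.
var nameRe = regexp.MustCompile(`<FeatureType><Name>(.*?)</Name><Title>`)

// featureNames returns every captured <Name> from the response body.
func featureNames(body string) []string {
	var names []string
	for _, m := range nameRe.FindAllStringSubmatch(body, -1) {
		names = append(names, m[1])
	}
	return names
}

// requestCount models the fan-out: 1 GetCapabilities request, plus one
// GetFeature per name, plus one CQL_FILTER probe per (name, column) pair.
func requestCount(numNames, colsPerName int) int {
	return 1 + numNames + numNames*colsPerName
}
```

With 20 names of 20 columns each this already yields 421 requests, so a target with thousands of features easily reaches the request counts described below.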
What I would like, to mitigate this issue and potential similar future templates, is a global per-task timeout. But I don't know if this is possible, since when using the SDK, cancelling the context does not stop running tasks.
To give stats on my target, with the scan unfinished and stopped at 14 min:
GET /geoserver/ows?service=WFS&version=1.0.0&request=GetCapabilities
/geoserver/ows?service=WFS&version=1.0.0&request=GetFeature&typeName={{name}}&maxFeatures=50&outputFormat=csv
/geoserver/ows?service=WFS&version=1.0.0&request=GetFeature&typeName={{name}}&CQL_FILTER=strStartswith({{column}},%27%27%27%27)=true
Another idea to mitigate this would be to set a maximum number of requests per template.
@babariviere , firstly, yes: @timeout overrides the default timeout set in nuclei (this override is expected, e.g. to detect blind SQLi). I was not able to reproduce this locally using the CLI. Can you share the details below:
1) Are you able to reproduce this in both the CLI & SDK, or only one of them?
2) Can you share more debug data (i.e. output using the -debug and -v flags)?
3) Does this happen with that particular target, or does it feel like a common issue affecting multiple targets?
If you believe some info might be sensitive, feel free to DM us on Discord, because we can't investigate/resolve this if we are not able to reproduce it.
@tarunKoyalwar
For a simple reproduction case:
docker run -it -p8080:8080 docker.osgeo.org/geoserver:2.25.1
nuclei -target http://localhost:8080 -t http/cves/2023/CVE-2023-25157.yaml -debug
Of course it is fast locally (9s, since there is no latency), but this still emits 400 HTTP queries, and on our target there are way more features than on this bare one (as stated above, >4000 requests).
But since this is target-dependent, I think the best option is still a "global task timeout" that kills the task if it takes too long.
@babariviere , although I wasn't able to observe nuclei being blocked, I think this is related to iterate-all . Actually, we have deprecated the iterate-all feature because it can be ambiguous at times when more than 2 requests are involved. I suspect that in this specific case there might be recursion, which is causing an infinite-loop-like situation.
Issue description:
When using nuclei as a library with the CVE-2023-25157 template, the scan against our target gets stuck and no timeout kills the task.
We are using the default opts (so a 5s timeout), and when running a scan with a global timeout of 2h30min, we hit that global timeout. By removing this template, the scan completes in 23 min.
I am using the latest version of nuclei (v3.2.8), and I am wondering if the issue is with the @timeout inside the template, which overrides the default timeout. I can help test this if needed, but I cannot disclose my target for NDA reasons.