Closed arminus closed 1 month ago
@arminus: Thanks for opening an issue, it is currently awaiting triage.
So here are the parsed details.
So the event is being poured into a bucket, which is the scenario triggering on Apache; the warning you are seeing is because there was no authentication header sent by the client.
Here are the filters around 40X response codes:
$ grep filter /etc/crowdsec/scenarios/http-generic-bf.yaml
filter: "evt.Meta.service == 'http' && evt.Meta.sub_type == 'auth_fail'"
filter: "evt.Meta.log_type == 'http_access-log' && evt.Parsed.verb == 'POST' && evt.Meta.http_status == '401'"
filter: "evt.Meta.log_type == 'http_access-log' && evt.Parsed.verb == 'POST' && evt.Meta.http_status == '403'"
Okay, looks like the "auth_fail" for Caddy will be more complicated. I guess what we should do is look at the response headers and also check if www-authenticate was requested.
Doing some testing
{"level":"error","ts":1711741798.0391326,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"127.0.0.1","remote_port":"46944","client_ip":"127.0.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:9080","uri":"/","headers":{"Authorization":[],"User-Agent":["curl/7.88.1"],"Accept":["*/*"]}},"bytes_read":0,"user_id":"","duration":0.782670468,"size":0,"status":401,"resp_headers":{"Server":["Caddy"],"Www-Authenticate":["Basic realm=\"restricted\""]}}
{"level":"error","ts":1711741827.9626286,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"127.0.0.1","remote_port":"54462","client_ip":"127.0.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:9080","uri":"/","headers":{"User-Agent":["curl/7.88.1"],"Accept":["*/*"]}},"bytes_read":0,"user_id":"","duration":0.000033987,"size":0,"status":401,"resp_headers":{"Server":["Caddy"],"Www-Authenticate":["Basic realm=\"restricted\""]}}
{"level":"info","ts":1711741864.947103,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"127.0.0.1","remote_port":"43498","client_ip":"127.0.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:9080","uri":"/","headers":{"Authorization":[],"User-Agent":["curl/7.88.1"],"Accept":["*/*"]}},"bytes_read":0,"user_id":"Bob","duration":0.794682124,"size":18630,"status":200,"resp_headers":{"Content-Type":["text/html; charset=utf-8"],"Last-Modified":["Fri, 08 Dec 2023 00:28:15 GMT"],"Accept-Ranges":["bytes"],"Content-Length":["18630"],"Server":["Caddy"],"Etag":["\"s5bnz3edi\""]}}
The first log is invalid credentials, the second is an empty authentication attempt, and the third is a successful request.
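To make the distinction between the three cases above concrete, here is an illustrative Python sketch (my own helper, not the actual CrowdSec parser): it looks at the status code, the Www-Authenticate response header, and whether the client sent an Authorization header at all.

```python
import json

def classify(line: str) -> str:
    """Classify a Caddy JSON access-log line into one of three cases:
    'auth_fail' (401 with credentials sent), 'no_credentials' (401 with no
    Authorization header at all), or 'ok' (everything else)."""
    entry = json.loads(line)
    challenged = "Www-Authenticate" in entry.get("resp_headers", {})
    sent_auth = "Authorization" in entry.get("request", {}).get("headers", {})
    if entry.get("status") == 401 and challenged:
        return "auth_fail" if sent_auth else "no_credentials"
    return "ok"
```

Note that in the first log the client did send an `Authorization` header (it shows up as an empty list in the headers), while the second log has no `Authorization` key at all, which is what lets the two 401s be told apart.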
Can you try updating the caddy collection to see if this is now properly handled?
cscli hub update && cscli hub upgrade
Thanks for the quick help!
I just ran the update, restarted the crowdsec container, then ran nikto again from a remote box, now the error is different:
time="2024-04-02T13:40:37Z" level=error msg="unable to collect sources from bucket: while extracting scope from bucket crowdsecurity/http-generic-bf: scope is Ip but Meta[source_ip] doesn't exist"
Also, ensure your Caddy installation is up to date with the GitHub releases, as using an old version may cause this error.
Ok, going to caddy 2.8 fixed the problem.
time="2024-04-02T14:04:55Z" level=info msg="(localhost/crowdsec) crowdsecurity/http-generic-bf by ip x.x.x.x (AT/197540) : 4h ban on Ip x.x.x.x"
time="2024-04-02T14:04:55Z" level=info msg="127.0.0.1 - [Tue, 02 Apr 2024 14:04:55 UTC] \"POST /v1/alerts HTTP/1.1 201 22.47711ms \"crowdsec/v1.6.0-4192af30\" \""
time="2024-04-02T14:04:59Z" level=info msg="Signal push: 1 signals to push"
There's one more though, I think:
time="2024-04-02T14:05:04Z" level=warning msg="failed to run RunTimeValue : invalid operation: int(<nil>) (1:1)\n | int(evt.Unmarshaled.caddy.status)\n | ^" id=red-sunset name=crowdsecurity/caddy-logs stage=s01-parse
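The `int(<nil>)` failure is consistent with the parser calling `int()` on a `status` field that isn't present in the log entry at all. A minimal Python sketch of the guard such a conversion would need (hypothetical helper, not CrowdSec code):

```python
from typing import Optional

def safe_status(entry: dict) -> Optional[int]:
    """Return the HTTP status from a parsed Caddy JSON log entry, or None when
    the field is absent (e.g. reverse-proxy errors logged without a status)."""
    status = entry.get("status")
    return int(status) if status is not None else None
```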
Do you have the caddy logs? I'd be surprised if there was no status response.
I have the full caddy log, but it's kind of hard to dig the relevant line out of it since the timestamps there are in ms vs the zulu times in the crowdsec log. I'll try later.
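On the timestamp mismatch: Caddy's `ts` field is a Unix epoch in seconds (with a fractional part), so correlating it with CrowdSec's UTC timestamps is a one-line conversion. A small helper (the function name is my own):

```python
from datetime import datetime, timezone

def caddy_ts_to_utc(ts: float) -> str:
    """Render a Caddy `ts` value (Unix epoch seconds) as an ISO-8601 UTC string."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(caddy_ts_to_utc(1712066552.3486984))  # -> 2024-04-02T14:02:32Z
```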
Haven't tried it, but jq?
jq '. | select( .status | . == null or . == "")' < /path/to/caddy/log
It depends; if you have other things logging to the file, such as debug logs for plugins, they will also be attempted to be parsed.
Ok, that produces a bunch of lines with this pattern (primarily for gitea and jira for which caddy is a proxy here):
{
"level": "error",
"ts": 1712066552.3486984,
"logger": "http.handlers.reverse_proxy",
"msg": "aborting with incomplete response",
"upstream": "172.19.0.8:3000",
"duration": 0.00405423,
"request": {
"remote_ip": "x.x.x.x",
"remote_port": "56973",
"client_ip": "x.x.x.x",
"proto": "HTTP/2.0",
"method": "GET",
"host": "git.mydomain.com",
"uri": "/user/events",
"headers": {
"Cache-Control": [
"no-cache"
],
"Cookie": [],
"Sec-Fetch-Dest": [
"empty"
],
"Te": [
"trailers"
],
"Pragma": [
"no-cache"
],
"X-Forwarded-Proto": [
"https"
],
"User-Agent": [
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0"
],
"X-Forwarded-Host": [
"git.mydomain.com"
],
"Accept-Encoding": [
"gzip, deflate, br"
],
"Sec-Fetch-Mode": [
"cors"
],
"Accept": [
"text/event-stream"
],
"Accept-Language": [
"de,en-US;q=0.7,en;q=0.3"
],
"Sec-Fetch-Site": [
"same-origin"
],
"X-Forwarded-For": [
"x.x.x.x"
]
},
"tls": {
"resumed": false,
"version": 772,
"cipher_suite": 4865,
"proto": "h2",
"server_name": "git.mydomain.com"
}
},
"error": "reading: context canceled"
}
Makes sense context is canceled so there is no status code, hmmm let me think about this
Hello, I have the same problem with Caddy 2.8.4 and Crowdsec 1.6.2; I don't manage to parse the logs. I've tried with a file and with a docker container, with logs set to info, error and debug, and I get the same error.
The error happens when the upstream service is not responding, so there is no status code; this shouldn't occur on every log line unless you have a wider issue.
You are right, I must have messed up something yesterday. I've started from scratch and it works fine if the status code is returned :)
Closing the issue as resolved; however, the warning caused by reverse proxy errors still exists and will be tracked as a separate issue within the appropriate repository from here on.
What happened?
I'm trying to test with nikto if my crowdsec setup for caddy logs works. Basically, that test produced a bunch of caddy log entries like this:
(remote IP anonymized here only)
crowdsec then produces this warning
(the timestamp might be slightly off, I'm not sure I picked the right warning line from the list of hundreds)
When I run the same nikto test against another box with crowdsec and apache logs, I get blocked; not with the caddy logs, though, which is kind of in line with the warning that it cannot fetch headers, and so can't recognize the IP for the 401?
What did you expect to happen?
crowdsec to fully parse the caddy log lines with the 401 errors
How can we reproduce it (as minimally and precisely as possible)?
acquis.yaml:
Anything else we need to know?
No response
Crowdsec version
OS version
Enabled collections and parsers
Acquisition config
Config show
Prometheus metrics
Related custom configs versions (if applicable): notification plugins, custom scenarios, parsers etc.