What happened?
When a user has a notification plugin whose requests may exceed the configured timeout, and too many requests are sent to the channel, the channel becomes blocking, meaning alerts and decisions are not processed in a timely manner.
What did you expect to happen?
The notification channel should be non-blocking and handle the queue of alerts outside of the alert HTTP handler. We should never block this HTTP handler, as ultimately it slows down detection and remediation. A rough sketch of the pattern I mean is below.
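As an illustration only (not CrowdSec's actual plugin code), this is the kind of decoupling I would expect: the HTTP handler only enqueues the alert and a worker goroutine deals with the slow notification target, so a slow channel never stalls the handler. Names, ports, and the queue size are made up for the example.

```go
// Minimal sketch of a non-blocking alert dispatch: the handler enqueues,
// a separate worker drains the queue and talks to the slow notifier.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// alertQueue decouples the HTTP handler from the notification plugin.
var alertQueue = make(chan string, 1000)

func alertHandler(w http.ResponseWriter, r *http.Request) {
	select {
	case alertQueue <- r.URL.Query().Get("alert"):
		w.WriteHeader(http.StatusCreated)
	default:
		// Queue is full: drop (or persist) instead of blocking detection.
		http.Error(w, "notification queue full", http.StatusServiceUnavailable)
	}
}

func notificationWorker() {
	for alert := range alertQueue {
		// Simulate a notification plugin call that exceeds its timeout.
		time.Sleep(5 * time.Second)
		fmt.Println("notified:", alert)
	}
}

func main() {
	go notificationWorker()
	http.HandleFunc("/v1/alerts", alertHandler)
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```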
How can we reproduce it (as minimally and precisely as possible)?
This is difficult to replicate; however, you can spin up a webserver with a route that takes longer than the default timeout, then generate a lot of events. I will try to make a Docker Compose example and attach it to this issue. A rough stand-in for the slow webhook target is sketched below.
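Until the Docker Compose example is ready, something like the following (the port, route, and sleep duration are arbitrary) can serve as the slow target: point an HTTP notification plugin at it and generate enough alerts, and the channel should back up.

```go
// Hypothetical slow webhook target used only to reproduce the issue:
// every request to /hook sleeps longer than the plugin's timeout.
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	http.HandleFunc("/hook", func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(30 * time.Second) // longer than the notification timeout
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```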
Anything else we need to know?
No response
Crowdsec version
```console
$ cscli version
# paste output here
```
OS version
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
Enabled collections and parsers
```console
$ cscli hub list -o raw
# paste output here
```
Acquisition config
```console
# On Linux:
$ cat /etc/crowdsec/acquis.yaml /etc/crowdsec/acquis.d/*
# paste output here
# On Windows:
C:\> Get-Content C:\ProgramData\CrowdSec\config\acquis.yaml
# paste output here
```
Config show
```console
$ cscli config show
# paste output here
```
Prometheus metrics
```console
$ cscli metrics
# paste output here
```
Related custom config versions (if applicable): notification plugins, custom scenarios, parsers, etc.
Check Releases to make sure your agent is on the latest version.