redshiftzero opened 7 years ago
From https://github.com/freedomofpress/securedrop/blob/develop/install_files/securedrop-ossec-server/var/ossec/etc/ossec.conf#L9 it is not obvious which of the monitored directories (/etc, /usr/bin, /usr/sbin, /bin, /sbin, /var/ossec) could create excessive noise. Is there a way to reproduce the problem in the staging environment somehow?
Oh, the agent has more: https://github.com/freedomofpress/securedrop/blob/develop/install_files/securedrop-ossec-agent/var/ossec/etc/ossec.conf#L9 :-) Is /var/lib/securedrop the one being reported too frequently?
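For reference, a paraphrased sketch of what those syscheck stanzas look like (based on the directories named in the linked confs; the real files may differ in options and layout):

```xml
<!-- Sketch only; see the actual ossec.conf files linked above. -->
<syscheck>
  <!-- Core system paths monitored on both server and agent -->
  <directories check_all="yes">/etc,/usr/bin,/usr/sbin,/bin,/sbin,/var/ossec</directories>
  <!-- The agent-side conf additionally watches the application data -->
  <directories check_all="yes">/var/lib/securedrop</directories>
</syscheck>
```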
We can reproduce this in the staging environment by provisioning staging with prod-like secrets in site-specific, so that we actually receive OSSEC emails. Having a test instance, or leaving staging running for a week or so, would be very useful for figuring out which alerts need to be suppressed.
Relevant to this ticket: http://ossec-docs.readthedocs.io/en/latest/faq/syscheck.html#how-do-i-stop-syscheck-alerts-during-system-updates
As for the nightly unattended-upgrade/cron-apt runs and their noisy syscheck alerts, my first inclination is to connect this to #2140: adding ELK gives you more ability to filter alerts and to parse them into fields, so you could write a Logstash filter to drop/discard binary hash-sum changes that come from legitimate updates (if there's any way to correlate that, either by the contents of the alert message or by the time the change is expected). I'd have to check how the Logstash confs are chained/numbered, though, and whether any custom filters would persist across Logstash/Wazuh updates. Also note that the Wazuh manager probably has the same email capability as the OSSEC manager, so this kind of filtering would only help the alerts dashboard, not the emails.
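As a sketch of that idea (the field names here are assumptions — they depend on how the alert is decoded into fields upstream; 550 is OSSEC's stock "Integrity checksum changed" rule id):

```
filter {
  # Hypothetical: drop integrity-checksum alerts for binary paths that
  # change on every legitimate package upgrade. Field names depend on
  # the upstream decoder and would need to be verified.
  if [rule][id] == "550" and [syscheck][path] =~ /^\/usr\/s?bin\// {
    drop { }
  }
}
```

Where this filter file sits in the conf chain matters, since Logstash concatenates its config files in lexical order.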
For the noisy e-mails themselves, one idea is to write a Postfix content filter on outgoing SMTP that drops certain messages. OSSEC's FAQ admits this is a hard problem; the approach they suggest is stopping the syscheck process during upgrades (I assume realtime checking is enabled rather than a schedule?) and then clearing its database afterward to establish a new baseline. I think that is the most reasonable solution, although it leaves a short vulnerable window during the upgrade that a live attacker on the system could take advantage of. That's why I also favor adding further tooling such as auditd to log all activity: those logs wouldn't need to be actively examined, they'd just be there in case something suspicious happens and a forensic investigation needs to go back in time after the fact.
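For the Postfix idea, a minimal sketch using header_checks (the subject pattern here is a guess and would have to be matched against the real alert e-mails; note that DISCARD silently drops the message, which cuts both ways):

```
# main.cf
header_checks = regexp:/etc/postfix/header_checks

# /etc/postfix/header_checks  (hypothetical pattern)
/^Subject: OSSEC Notification.*Integrity checksum changed/ DISCARD
```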
The solution of stopping syscheck right before upgrades run and resetting the baseline database afterward is something that can be accomplished soon.
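A rough sketch of that stopgap, assuming OSSEC's default install paths and following the FAQ recipe (stop OSSEC, run the upgrade, clear the syscheck database, restart so the next scan becomes the new baseline). Note that ossec-control stops the whole daemon set, not just syscheck, which is part of the vulnerable window discussed above:

```
#!/bin/sh
# Hypothetical wrapper for the nightly upgrade run (e.g. invoked in
# place of calling unattended-upgrade/cron-apt directly).
set -e

/var/ossec/bin/ossec-control stop

unattended-upgrade            # or whatever the nightly upgrade job runs

# Clear the syscheck databases so files legitimately changed by the
# upgrade don't fire "Integrity checksum changed" alerts; the next
# scan establishes a fresh baseline.
rm -f /var/ossec/queue/syscheck/*

/var/ossec/bin/ossec-control start
```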
It's unclear how well FPF is positioned to help admins with incident response and with recognizing what's suspicious; that skill needs to be addressed during initial training. I accept that admins have varying backgrounds, and I'm not sure how well the Redmine support channel has been working out, but I think more visibility into deployments is probably a good idea (see #973), as long as it's overwhelmingly clear that logs which could reveal source material, metadata, or anything of the like are out of scope.
Stopping syscheck prior to upgrade and establishing a new baseline post-upgrade is not ideal. However, these alerts during upgrades create a massive amount of spam and make the OSSEC alerts almost useless in their current incarnation, so I agree that we should implement this as a stopgap until we have a better solution (we can re-evaluate when we move to other monitoring tools).
Alright, sounds good. Strange, they seem to have removed https://ossec-docs.readthedocs.io/en/latest/manual/syscheck/#how-do-i-stop-syscheck-alerts-during-system-updates from the OSSEC FAQ. I wonder why...
Over in #2155, @micahflee said:
I think we should remove the Daily report: File Changes email as well. It gets sent every day, and since other file changes get sent in other emails, it's redundant.
That's a pretty good point. @freddymartinez9 added:
do you think we should suppress all emails for file changes except the Daily report: File Changes, since I think that one is more user friendly.
Worth considering! Neither suggestion addresses the problem of concretely defining appropriate admin responses to file changes: were they unattended upgrades? Aren't those files supposed to change during upgrades? Still, let's do what we can to make the notifications informative and less exhausting to wade through.
Currently our file integrity monitoring (`Integrity checksum changed for 'blah'`) is producing a lot of alerts sent to admins. This is quite a bit of noise for them to wade through, contributing to the alert-fatigue problem, since it isn't clear which of the integrity checksum changes are indicators of compromise. We should write rules to more carefully select which files we alert on with file integrity monitoring.
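One possible shape for such rules, as a hypothetical local_rules.xml sketch: override OSSEC's stock rule 550 ("Integrity checksum changed") for specific paths so that expected changes are logged at level 0 (no email), while everything else keeps the default severity. The path below is only an example, not a recommendation:

```xml
<group name="local,syscheck,">
  <!-- Hypothetical override: silence checksum-change alerts for a path
       that is expected to change during routine package upgrades. -->
  <rule id="100550" level="0">
    <if_sid>550</if_sid>
    <match>/usr/share/man</match>
    <description>Ignore expected checksum changes (example path).</description>
  </rule>
</group>
```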