praetorian-inc / chariot-ui

Chariot Offensive Security Platform
https://preview.chariot.praetorian.com
MIT License

Remove Duplicate Detections #94

Open · Ameston opened 2 weeks ago

Ameston commented 2 weeks ago

Service detection findings are generated for every DNS name that maps to an IP. As a result, a single exposure on a single asset is "double-counted" if it has multiple DNS names pointing to it.

Additionally, risk detections are "duplicated" in the following scenario: a host has an IP address that originates from a CIDR range, and two hostnames (abc.com and www.abc.com) point to the same vHost (they host the same content, present the same certificate, etc.). A risk detection would be created for each of the three assets (the IP and both hostnames). We'd like those three issues consolidated into a single risk.

Why? "Duplicate" risks complicate metrics (inflating the count of true exposures), clutter triage, and can obscure what remediation actually requires (fixing the one underlying issue).
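To make the duplication concrete, here's a minimal sketch (hypothetical record shapes, not the actual Chariot data model) of how per-asset detection turns one underlying exposure into three risks:

```python
# Hypothetical illustration of the duplication described above; these
# structures are NOT the actual Chariot schema.
assets = [
    {"name": "1.2.3.4", "kind": "ip"},      # IP from a seeded CIDR range
    {"name": "abc.com", "kind": "dns"},     # resolves to 1.2.3.4
    {"name": "www.abc.com", "kind": "dns"}, # same vHost, same certificate
]

def run_detection(asset):
    # One vulnerable vHost sits behind all three assets,
    # so scanning each asset "finds" the same exposure.
    return {"risk": "exposed-admin-panel", "asset": asset["name"]}

risks = [run_detection(a) for a in assets]
print(len(risks))  # 3 risks reported for 1 underlying exposure
```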

noah-tutt-praetorian commented 1 week ago

@praetorian-harry and I spent some time discussing this yesterday, and I had a few takeaways from that discussion that are informing my thoughts on this issue.

First, we've implemented a solution for tagging nuclei templates as detections that create attributes instead of risks. This should cut down on a significant portion of the noise associated with detection templates.
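As a rough sketch of that routing, assuming a hypothetical `detection` tag and simplified record shapes (not the actual implementation), a tagged template would record an attribute on the asset rather than opening a risk:

```python
# Hypothetical sketch: route nuclei results based on a template tag.
# The "detection" tag and record shapes are assumptions for illustration.
def handle_result(template: dict, target: str) -> dict:
    if "detection" in template.get("tags", []):
        # Detection templates record an attribute on the asset
        # instead of opening a risk.
        return {"type": "attribute", "asset": target,
                "name": template["id"], "value": "present"}
    return {"type": "risk", "asset": target, "name": template["id"]}

print(handle_result({"id": "rdp-detect", "tags": ["detection"]}, "1.2.3.4"))
print(handle_result({"id": "cve-2024-0001", "tags": ["cve"]}, "1.2.3.4"))
```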

The latter example gets into an area where it's very hard to write firm rules. Take, for instance, the architecture of old Chariot:

Ameston commented 1 week ago

I agree. I think the solution for HTTP/application-layer findings is much less straightforward. What if we start with deduplicating service detection findings? It should be much easier to reduce service detection risks to one finding per IP, since those exposures occur at the IP level and we include ports as references. I know this might imply some schema rework, so I'm curious what your thoughts on this component are.
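A minimal sketch of that reduction, assuming findings carry an IP, port, and detection name (hypothetical shapes, not the current schema): group by (IP, detection) and fold the ports and DNS names into a single finding's references.

```python
from collections import defaultdict

# Hypothetical finding records; the real schema may differ.
findings = [
    {"ip": "1.2.3.4", "port": 3389, "name": "rdp-detect", "dns": "abc.com"},
    {"ip": "1.2.3.4", "port": 3389, "name": "rdp-detect", "dns": "www.abc.com"},
    {"ip": "1.2.3.4", "port": 135,  "name": "rpc-detect", "dns": "abc.com"},
]

def dedupe_by_ip(findings):
    merged = defaultdict(lambda: {"ports": set(), "dns": set()})
    for f in findings:
        key = (f["ip"], f["name"])           # one risk per IP + detection
        merged[key]["ports"].add(f["port"])  # ports kept as references
        merged[key]["dns"].add(f["dns"])
    return [{"ip": ip, "name": name,
             "ports": sorted(m["ports"]), "dns": sorted(m["dns"])}
            for (ip, name), m in merged.items()]

for risk in dedupe_by_ip(findings):
    print(risk)  # 3 raw findings reduce to 2 risks, one per IP/detection
```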

Ameston commented 1 week ago

A possibility Harry and I discussed was providing a mechanism for issues to be "linked" or otherwise marked as duplicates, so that once a user determines a particular issue is actually a duplicate, metrics can be calculated appropriately.

This idea makes the most sense to me for application-layer exposures, and I'll run this by the MSP team on Friday.
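One possible shape for such a link, purely illustrative (the `duplicate_of` field and metric are assumptions, not Chariot's schema): a pointer set at triage time, with metrics computed only over canonical open risks.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model only; field names are assumptions, not Chariot's schema.
@dataclass
class Risk:
    id: str
    status: str = "open"
    duplicate_of: Optional[str] = None  # set by a user during triage

def open_risk_count(risks: list[Risk]) -> int:
    # Linked duplicates stay on record but are excluded from metrics.
    return sum(1 for r in risks if r.status == "open" and r.duplicate_of is None)

risks = [Risk("r1"), Risk("r2", duplicate_of="r1"), Risk("r3", duplicate_of="r1")]
print(open_risk_count(risks))  # 1, not 3
```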

noah-tutt-praetorian commented 6 days ago

@Ameston After some further discussion, I believe we have all the tools in the application right now to link findings. I'd like to trial this process and see how it feels before making further tweaks, if you're up for it:

This will let us keep a record of the linked risks while excluding them from metrics for open instances. It will also keep us away from tricky consolidation logic.

I see a few possible downsides to this approach:

Ameston commented 6 days ago

This process seems fine to me for application-layer risks, and we can write some helper scripts to make it easier. What about service-level detections (RPC, RDP, etc.)? I feel like we can automatically deduplicate those, since they are IP/port-level exposures.

noah-tutt-praetorian commented 6 days ago

I'm still thinking that piece over. I have a way of doing it, but I'm not sure it's the right approach yet.

Ameston commented 6 days ago

Roger. The vHost/application-layer solution sounds good to me, and I'll proceed with demoing/PoCing that next week.