DataDog / KubeHound

Tool for building Kubernetes attack paths
https://kubehound.io
Apache License 2.0

[CORE] Performance improvements #132

Closed: d0g0x01 closed this 11 months ago

d0g0x01 commented 11 months ago

This PR implements a number of changes to optimize graph creation and speed up ingestion:

  1. Use an in-memory graph backend
  2. Tune the graph configuration to better optimize for writes
  3. Optimize the queries used to generate edges
  4. Optimize K8s API querying (a paging sketch follows this list)
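
For item 4, here is a minimal sketch of one common way to make large K8s list calls cheaper: paging through results with client-go's `Limit`/`Continue` list options so memory stays bounded and the API server can return results in chunks. This is illustrative only and not necessarily what the PR does; the `listAllPods` helper is a made-up name.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listAllPods pages through pods 500 at a time instead of pulling the whole
// cluster inventory in a single request.
func listAllPods(ctx context.Context, client kubernetes.Interface) ([]corev1.Pod, error) {
	var pods []corev1.Pod
	opts := metav1.ListOptions{Limit: 500}
	for {
		list, err := client.CoreV1().Pods("").List(ctx, opts)
		if err != nil {
			return nil, fmt.Errorf("listing pods: %w", err)
		}
		pods = append(pods, list.Items...)
		if list.Continue == "" {
			break
		}
		opts.Continue = list.Continue
	}
	return pods, nil
}
```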

Total runtime for a cluster of 25k pods drops from 45 minutes to 6 minutes, and graph creation time drops from 35 minutes to 30 seconds.

It also fixes a number of minor bugs around telemetry and logging discovered during the performance testing.

d0g0x01 commented 11 months ago

Nice! Huge improvements!

Looks good to me, left a few notes / improvements that could be beneficial for end users imo.

And a few general things that may be worth checking in another PR:

  1. Can we try changing the `DisableCompression bool` setting to false in the k8s client config? I'd expect it to be slower if we set it to true (disabling compression) because more data would go over the network, but that would be interesting to test (maybe even as a config option 😅 with sensible defaults). See the first sketch after this list for where the knob lives.
  2. According to this profile (around the 3rd minute, since the profiles are minute by minute, and I imagine that was during the mongo insertion step) [profiling screenshot], we spent about half of the CPU time (which, I understand, is not the most significant part of the wall time) in garbage collection, and a lot of time copying data to mongo. Is there a way we could pre-allocate some of the buffers and reuse them instead of creating a new one for everything? See the second sketch after this list.
  3. This profile (a bit earlier in the process, during the k8s API fetching part) [profiling screenshot] shows that we Unmarshal() a lot, which makes sense given we have GBs of data coming through there on a large cluster. But I don't think we need to process it, do we? We're just forwarding it to mongo as-is? (I'm unsure, for example, whether the `item, ok := obj.(*rbacv1.RoleBinding)` type assertion causes an allocation because it's a "complex" type, and we don't really need it since we just forward the raw JSON to mongo anyway.) I mean: we currently do K8s JSON API => parse JSON => copy to object (via the StoreConverter) => encode BSON => send to mongo. Could we (maybe not in all cases, since some have processing steps in there) decode directly to the wanted type, with tags for both bson and json, and avoid a copy? It looks like we could save maybe 30-45 seconds there (both in GC time and time spent unmarshalling). See the third sketch after this list.
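
On point 1, a quick sketch of where that knob lives, assuming the standard client-go `rest.Config`, which exposes a `DisableCompression` field: leaving it false keeps gzip compression of API responses enabled, trading some CPU for less data over the wire. The `newClient` wrapper and the idea of surfacing it as a KubeHound config option are illustrative only.

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with compression explicitly toggled; false
// (the default) keeps gzip compression of API responses enabled.
func newClient(kubeconfig string, disableCompression bool) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.DisableCompression = disableCompression
	return kubernetes.NewForConfig(cfg)
}
```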
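
On point 2, a sketch of the pre-allocate-and-reuse idea using a `sync.Pool`, so the ingest path recycles encode buffers across objects instead of allocating a fresh one per document. Purely illustrative; `marshalPooled` is not KubeHound code.

```go
package main

import (
	"bytes"
	"encoding/json"
	"sync"
)

// Pool of reusable encode buffers, grown once to a typical object size and
// recycled across writes to reduce GC pressure.
var bufPool = sync.Pool{
	New: func() any {
		b := new(bytes.Buffer)
		b.Grow(4 * 1024)
		return b
	},
}

// marshalPooled encodes v into a pooled buffer and returns a copy of the
// bytes; the buffer itself goes back into the pool for the next object.
func marshalPooled(v any) ([]byte, error) {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	defer bufPool.Put(buf)

	if err := json.NewEncoder(buf).Encode(v); err != nil {
		return nil, err
	}
	out := make([]byte, buf.Len())
	copy(out, buf.Bytes())
	return out, nil
}
```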
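
On point 3, a sketch of the single-decode idea: a type that carries both `json` and `bson` tags can be decoded once from the API response and handed straight to the mongo driver, skipping the intermediate copy through the StoreConverter. The `roleBindingDoc` shape and its fields are made up for illustration; as noted, some objects may still need processing steps in between.

```go
package main

import (
	"context"
	"encoding/json"

	"go.mongodb.org/mongo-driver/mongo"
)

// roleBindingDoc is decoded once from the raw API JSON and inserted as-is;
// the bson tags let the mongo driver encode the same struct directly.
type roleBindingDoc struct {
	Name      string `json:"name"      bson:"name"`
	Namespace string `json:"namespace" bson:"namespace"`
	RoleRef   string `json:"roleRef"   bson:"role_ref"`
}

func ingest(ctx context.Context, coll *mongo.Collection, raw []byte) error {
	var doc roleBindingDoc
	if err := json.Unmarshal(raw, &doc); err != nil {
		return err
	}
	// No second copy: the same struct is bson-encoded on insert.
	_, err := coll.InsertOne(ctx, doc)
	return err
}
```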

This is all good stuff and will probably make a difference. Since you've identified these, I think it would be best for you to take them forward. No huge rush, as we're more than good enough with the current implementation, but it would be nice to have when you can spare some time.