kubearmor / KubeArmor

Runtime Security Enforcement System. Workload hardening/sandboxing and implementing least-permissive policies made easy leveraging LSMs (BPF-LSM, AppArmor).
https://kubearmor.io/
Apache License 2.0

Detecting Risks and Sensitive Assets Access #1156

Open PrimalPimmy opened 1 year ago

PrimalPimmy commented 1 year ago

Feature Request

Short Description

KubeArmor can be leveraged to show Sensitive Assets Access, for example, volume mount points that may be accessed by processes we do not know about. An example table would look like:

Sensitive Assets Access:

| Name                  | Mount Path                                    | Accessed By  | Last Accessed | Verdict |
| --------------------- | --------------------------------------------- | ------------ | ------------- | ------- |
| kube-api-access-f8sqr | /var/run/secrets/kubernetes.io/serviceaccount | /bin/vault   | 1 day ago     | Allow   |
| kube-api-access-f8sqr | /var/run/secrets/kubernetes.io/serviceaccount | /bin/xyz     | 10 mins ago   | Allow   |
| kube-api-access-f8sqr | /var/run/secrets/kubernetes.io/serviceaccount | /bin/unknown | 1 min ago     | Deny    |

Is your feature request related to a problem? Please describe the use case.

This would be a great and important feature. I have already implemented it by checking whether Service Account tokens are mounted, but the same can be done for other important volume mounts as well.

Describe the solution you'd like

Use client-go to extract the necessary information from the Kubernetes manifest, and use discovery-engine to check all possible sensitive assets being accessed.
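The extraction step above could be sketched as follows. This is a minimal, self-contained illustration: `VolumeMount`, `Container`, and `PodSpec` are hypothetical, simplified stand-ins for the corev1 types, and `sensitiveMounts` is an illustrative helper, not an existing API; a real implementation would fetch pods via client-go and walk `pod.Spec`.

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the corev1 Pod spec types.
// A real implementation would use k8s.io/api/core/v1 and fetch pods
// via client-go; these structs only keep the sketch self-contained.
type VolumeMount struct {
	Name      string
	MountPath string
}

type Container struct {
	Name         string
	VolumeMounts []VolumeMount
}

type PodSpec struct {
	Containers []Container
}

// sensitiveMounts walks every container's volume mounts and returns
// those whose mount path appears in the caller-supplied sensitive set.
func sensitiveMounts(spec PodSpec, sensitive map[string]bool) []VolumeMount {
	var found []VolumeMount
	for _, c := range spec.Containers {
		for _, m := range c.VolumeMounts {
			if sensitive[m.MountPath] {
				found = append(found, m)
			}
		}
	}
	return found
}

func main() {
	spec := PodSpec{Containers: []Container{{
		Name: "app",
		VolumeMounts: []VolumeMount{
			{Name: "kube-api-access-f8sqr", MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
			{Name: "data", MountPath: "/data"},
		},
	}}}
	sensitive := map[string]bool{
		"/var/run/secrets/kubernetes.io/serviceaccount": true,
	}
	for _, m := range sensitiveMounts(spec, sensitive) {
		fmt.Printf("%s -> %s\n", m.Name, m.MountPath)
	}
}
```

The set of sensitive paths would come from configuration (or from discovery-engine's findings) rather than being hardcoded.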

DOC in Progress

nyrahul commented 1 year ago

Reference requirements document.

Requirements doc for Sensitive Asset access.pdf

Vyom-Yadav commented 1 year ago

@nyrahul We need to add more information to Sensitive Assets Access; my suggestion is a new column named Volume Type.

| Name | Mount Path | Accessed By | Last Accessed | Verdict | Volume Type |
| ---- | ---------- | ----------- | ------------- | ------- | ----------- |
| ...  | ...        | ...         | ...           | ...     | hostPath    |
| ...  | /var/run/secrets/kubernetes.io/serviceaccount | ... | ... | ... | projected |
| ...  | /var/run/secrets/kubernetes.io/serviceaccount/ca.crt | ... | ... | ... | projected.sources.configMap |
| ...  | /var/run/secrets/kubernetes.io/serviceaccount/token | ... | ... | ... | projected.sources.serviceAccountToken |

This will:

  1. Give users crucial information about which volume type is being accessed. This is not difficult to achieve and can be done by iterating over the pod manifest.
  2. Give the API consumer (such as the admission controller) a way to create better policies. From a logical point of view, it is hard to differentiate between /var/run/secrets/kubernetes.io/serviceaccount and /var/run/foo. Service account tokens are mounted at that path by default, but we shouldn't hardcode it; we should proceed in a more generic way.

The list of volume types is finite.

type VolumeSource struct {
    // HostPath represents a pre-existing file or directory on the host
    // machine that is directly exposed to the container. This is generally
    // used for system agents or other privileged things that are allowed
    // to see the host machine. Most containers will NOT need this.
    // More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath
    // ---
    // TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not
    // mount host directories as read/write.
    // +optional
    HostPath *HostPathVolumeSource `json:"hostPath,omitempty" protobuf:"bytes,1,opt,name=hostPath"`
    // EmptyDir represents a temporary directory that shares a pod's lifetime.
    // More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
    // +optional
    EmptyDir *EmptyDirVolumeSource `json:"emptyDir,omitempty" protobuf:"bytes,2,opt,name=emptyDir"`
    // GCEPersistentDisk represents a GCE Disk resource that is attached to a
    // kubelet's host machine and then exposed to the pod.
    // More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
    // +optional
    GCEPersistentDisk *GCEPersistentDiskVolumeSource `json:"gcePersistentDisk,omitempty" protobuf:"bytes,3,opt,name=gcePersistentDisk"`
    // AWSElasticBlockStore represents an AWS Disk resource that is attached to a
    // kubelet's host machine and then exposed to the pod.
    // More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
    // +optional
    AWSElasticBlockStore *AWSElasticBlockStoreVolumeSource `json:"awsElasticBlockStore,omitempty" protobuf:"bytes,4,opt,name=awsElasticBlockStore"`
    // GitRepo represents a git repository at a particular revision.
    // DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an
    // EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir
    // into the Pod's container.
    // +optional
    GitRepo *GitRepoVolumeSource `json:"gitRepo,omitempty" protobuf:"bytes,5,opt,name=gitRepo"`
    // Secret represents a secret that should populate this volume.
    // More info: https://kubernetes.io/docs/concepts/storage/volumes#secret
    // +optional
    Secret *SecretVolumeSource `json:"secret,omitempty" protobuf:"bytes,6,opt,name=secret"`
    // NFS represents an NFS mount on the host that shares a pod's lifetime
    // More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
    // +optional
    NFS *NFSVolumeSource `json:"nfs,omitempty" protobuf:"bytes,7,opt,name=nfs"`
    // ISCSI represents an ISCSI Disk resource that is attached to a
    // kubelet's host machine and then exposed to the pod.
    // More info: https://examples.k8s.io/volumes/iscsi/README.md
    // +optional
    ISCSI *ISCSIVolumeSource `json:"iscsi,omitempty" protobuf:"bytes,8,opt,name=iscsi"`
    // Glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime.
    // More info: https://examples.k8s.io/volumes/glusterfs/README.md
    // +optional
    Glusterfs *GlusterfsVolumeSource `json:"glusterfs,omitempty" protobuf:"bytes,9,opt,name=glusterfs"`
    // PersistentVolumeClaimVolumeSource represents a reference to a
    // PersistentVolumeClaim in the same namespace.
    // More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
    // +optional
    PersistentVolumeClaim *PersistentVolumeClaimVolumeSource `json:"persistentVolumeClaim,omitempty" protobuf:"bytes,10,opt,name=persistentVolumeClaim"`
    // RBD represents a Rados Block Device mount on the host that shares a pod's lifetime.
    // More info: https://examples.k8s.io/volumes/rbd/README.md
    // +optional
    RBD *RBDVolumeSource `json:"rbd,omitempty" protobuf:"bytes,11,opt,name=rbd"`
    // FlexVolume represents a generic volume resource that is
    // provisioned/attached using an exec based plugin.
    // +optional
    FlexVolume *FlexVolumeSource `json:"flexVolume,omitempty" protobuf:"bytes,12,opt,name=flexVolume"`
    // Cinder represents a cinder volume attached and mounted on kubelets host machine.
    // More info: https://examples.k8s.io/mysql-cinder-pd/README.md
    // +optional
    Cinder *CinderVolumeSource `json:"cinder,omitempty" protobuf:"bytes,13,opt,name=cinder"`
    // CephFS represents a Ceph FS mount on the host that shares a pod's lifetime
    // +optional
    CephFS *CephFSVolumeSource `json:"cephfs,omitempty" protobuf:"bytes,14,opt,name=cephfs"`
    // Flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running
    // +optional
    Flocker *FlockerVolumeSource `json:"flocker,omitempty" protobuf:"bytes,15,opt,name=flocker"`
    // DownwardAPI represents downward API about the pod that should populate this volume
    // +optional
    DownwardAPI *DownwardAPIVolumeSource `json:"downwardAPI,omitempty" protobuf:"bytes,16,opt,name=downwardAPI"`
    // FC represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod.
    // +optional
    FC *FCVolumeSource `json:"fc,omitempty" protobuf:"bytes,17,opt,name=fc"`
    // AzureFile represents an Azure File Service mount on the host and bind mount to the pod.
    // +optional
    AzureFile *AzureFileVolumeSource `json:"azureFile,omitempty" protobuf:"bytes,18,opt,name=azureFile"`
    // ConfigMap represents a configMap that should populate this volume
    // +optional
    ConfigMap *ConfigMapVolumeSource `json:"configMap,omitempty" protobuf:"bytes,19,opt,name=configMap"`
    // VsphereVolume represents a vSphere volume attached and mounted on kubelets host machine
    // +optional
    VsphereVolume *VsphereVirtualDiskVolumeSource `json:"vsphereVolume,omitempty" protobuf:"bytes,20,opt,name=vsphereVolume"`
    // Quobyte represents a Quobyte mount on the host that shares a pod's lifetime
    // +optional
    Quobyte *QuobyteVolumeSource `json:"quobyte,omitempty" protobuf:"bytes,21,opt,name=quobyte"`
    // AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.
    // +optional
    AzureDisk *AzureDiskVolumeSource `json:"azureDisk,omitempty" protobuf:"bytes,22,opt,name=azureDisk"`
    // PhotonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine
    PhotonPersistentDisk *PhotonPersistentDiskVolumeSource `json:"photonPersistentDisk,omitempty" protobuf:"bytes,23,opt,name=photonPersistentDisk"`
    // Items for all in one resources secrets, configmaps, and downward API
    Projected *ProjectedVolumeSource `json:"projected,omitempty" protobuf:"bytes,26,opt,name=projected"`
    // PortworxVolume represents a portworx volume attached and mounted on kubelets host machine
    // +optional
    PortworxVolume *PortworxVolumeSource `json:"portworxVolume,omitempty" protobuf:"bytes,24,opt,name=portworxVolume"`
    // ScaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.
    // +optional
    ScaleIO *ScaleIOVolumeSource `json:"scaleIO,omitempty" protobuf:"bytes,25,opt,name=scaleIO"`
    // StorageOS represents a StorageOS volume attached and mounted on Kubernetes nodes.
    // +optional
    StorageOS *StorageOSVolumeSource `json:"storageos,omitempty" protobuf:"bytes,27,opt,name=storageos"`
    // CSI (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
    // +optional
    CSI *CSIVolumeSource `json:"csi,omitempty" protobuf:"bytes,28,opt,name=csi"`
    // Ephemeral represents a volume that is handled by a cluster storage driver.
    // The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts,
    // and deleted when the pod is removed.
    //
    // Use this if:
    // a) the volume is only needed while the pod runs,
    // b) features of normal volumes like restoring from snapshot or capacity
    //    tracking are needed,
    // c) the storage driver is specified through a storage class, and
    // d) the storage driver supports dynamic volume provisioning through
    //    a PersistentVolumeClaim (see EphemeralVolumeSource for more
    //    information on the connection between this volume type
    //    and PersistentVolumeClaim).
    //
    // Use PersistentVolumeClaim or one of the vendor-specific
    // APIs for volumes that persist for longer than the lifecycle
    // of an individual pod.
    //
    // Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to
    // be used that way - see the documentation of the driver for
    // more information.
    //
    // A pod can use both types of ephemeral volumes and
    // persistent volumes at the same time.
    //
    // +optional
    Ephemeral *EphemeralVolumeSource `json:"ephemeral,omitempty" protobuf:"bytes,29,opt,name=ephemeral"`
}
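Since exactly one source field of a `VolumeSource` is set, the Volume Type value for the table could be derived by checking which field is non-nil once the mount's volume is resolved from the pod spec. A minimal sketch, using trimmed, hypothetical stand-ins for the real corev1 structs and reflection over the field names (note that lower-casing the first letter only approximates the JSON tag; a few real fields, e.g. `storageos`, deviate from that pattern):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// Trimmed, hypothetical stand-ins for the corev1 volume source types;
// only three of the finite set of fields are shown to keep this short.
type HostPathVolumeSource struct{ Path string }
type SecretVolumeSource struct{ SecretName string }
type ProjectedVolumeSource struct{}

type VolumeSource struct {
	HostPath  *HostPathVolumeSource
	Secret    *SecretVolumeSource
	Projected *ProjectedVolumeSource
}

// volumeType reports the name of the single non-nil source field,
// lower-casing the first letter to approximate the JSON tag.
func volumeType(vs VolumeSource) string {
	v := reflect.ValueOf(vs)
	t := v.Type()
	for i := 0; i < v.NumField(); i++ {
		if !v.Field(i).IsNil() {
			name := t.Field(i).Name
			return strings.ToLower(name[:1]) + name[1:]
		}
	}
	return "unknown"
}

func main() {
	fmt.Println(volumeType(VolumeSource{Projected: &ProjectedVolumeSource{}})) // projected
	fmt.Println(volumeType(VolumeSource{HostPath: &HostPathVolumeSource{Path: "/var/log"}})) // hostPath
}
```

For the `projected.sources.*` sub-types in the proposed table, the same idea would recurse one level into the projected volume's sources list.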