thecodeteam / roadmap

The {code} Team Roadmap

[Proposal] Fix ScaleIO secrets to support multi-tenancy #202

Closed vladimirvivien closed 6 years ago

vladimirvivien commented 6 years ago

Rationale

A ScaleIO storage class in Kubernetes uses a secret object to store the sensitive credentials used to access the ScaleIO storage cluster. Currently, the secret can be placed in any arbitrary namespace. While flexible, this has security implications for ScaleIO access, as outlined below:

1) Flat ScaleIO credential structure: ScaleIO uses a simple (flat) access model to its system. When a system account is set up to access ScaleIO, that account provides unfettered access to other ScaleIO resources. This makes it critical to protect access to the ScaleIO service account and restrict it from general access.

2) Possible leak of ScaleIO access: When a Kubernetes secret is associated with a ScaleIO service account, that secret grants access to all ScaleIO resources accessible by the account (see item 1). This exposes a security gap between ScaleIO and Kubernetes. Since secrets can be created in any namespace, ScaleIO-related secrets may leak to Kubernetes user roles that should not have access to ScaleIO (see the example secret below).
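
For context, the secret referenced by a ScaleIO storage class is an ordinary Kubernetes secret holding the ScaleIO gateway credentials. A minimal sketch follows; the secret name, namespace, and credential values are illustrative only:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sio-secret      # referenced by secretRef in a ScaleIO storage class
  namespace: default    # today the secret may live in any namespace
type: kubernetes.io/scaleio
stringData:
  username: sio-admin     # ScaleIO gateway account (illustrative)
  password: sio-password  # ScaleIO gateway password (illustrative)
```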

Solution

The proposed solution is to add a parameter to the volume spec that specifies the namespace of the secret. For instance, take the following ScaleIO StorageClass definition:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sio-small
provisioner: kubernetes.io/scaleio
parameters:
  gateway: https://localhost:443/api
  system: scaleio
  protectionDomain: pd01
  storagePool: sp01
  secretRef: sio-secret
  secretNamespace: secret-ns
  fsType: xfs
```

Notice the addition of the new parameter secretNamespace, which explicitly specifies the namespace of the secret object. If secretNamespace is omitted, the ScaleIO plugin will default to the namespace of the PVC when a claim is created against the storage class.
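
To illustrate the defaulting behavior, assume a variant of sio-small that omits secretNamespace. A claim such as the sketch below, created in namespace team-a, would cause the plugin to look for the secret in team-a. The claim name, namespace, and size are illustrative:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-claim
  namespace: team-a        # with secretNamespace omitted from the class,
                           # the plugin would resolve the secret here
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: sio-small
  resources:
    requests:
      storage: 8Gi
```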

Volumes that are declared in-line within a pod declaration will not have that flexibility, as the secret is required to be in the same namespace as the pod where the volume is declared.
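
For comparison, a sketch of an in-line ScaleIO volume in a pod spec: secretRef carries only a name, so the secret is resolved in the pod's namespace. The pod name, namespace, and volume name are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sio-inline-pod
  namespace: team-a
spec:
  containers:
    - name: app
      image: busybox
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      scaleIO:
        gateway: https://localhost:443/api
        system: scaleio
        protectionDomain: pd01
        storagePool: sp01
        volumeName: vol-01
        fsType: xfs
        secretRef:
          name: sio-secret   # no namespace field: resolved in the pod's namespace (team-a)
```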