kubernetes-retired / cluster-api-provider-nested

Cluster API Provider for Nested Clusters
Apache License 2.0

Need Resource Syncing Policy #265

Closed weiling61 closed 2 years ago

weiling61 commented 2 years ago

User Story

As an operator, I would like to control which resources are synced to a tenant virtual cluster, for security and billing purposes.

Detailed Description

Issue and Requirement

Policy Provision

Proposal 1: Use Configmap for Resource Syncing Policy

  1. The per-tenant syncing policy consists of an allowed-resource list for that tenant.
  2. Store the tenant syncing policy in a ConfigMap for each virtual cluster.
  3. Deploy the ConfigMap in the virtual cluster control plane.
  4. The resource syncer reads the tenant's ConfigMap and performs syncing accordingly.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vc-sample-1
  namespace: default-532c0e-vc-sample-1
data:
  <api-group>.allowed: |
    <resource kind>=<resource-instance-name>
  v1.allowed: |
    runtimeclass=microvm
    runtimeclass=kata
    storageclass=local-storage
  scheduling.k8s.io.allowed: |
    priorityclasses=p1
    priorityclasses=p2
```
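To illustrate how the syncer could consume these entries, here is a minimal sketch of parsing the `data` section above into a lookup set. The `ParseAllowedResources` helper and the `allowKey` type are assumptions for illustration, not existing syncer code:

```go
package main

import (
	"fmt"
	"strings"
)

// allowKey identifies one allowed resource instance for a tenant.
type allowKey struct {
	APIGroup string // e.g. "v1" or "scheduling.k8s.io"
	Kind     string // e.g. "runtimeclass"
	Name     string // e.g. "microvm"
}

// ParseAllowedResources turns ConfigMap data entries of the form
// "<api-group>.allowed" -> "<kind>=<name>\n..." into a lookup set.
func ParseAllowedResources(data map[string]string) map[allowKey]bool {
	allowed := map[allowKey]bool{}
	for key, value := range data {
		if !strings.HasSuffix(key, ".allowed") {
			continue // ignore unrelated entries
		}
		group := strings.TrimSuffix(key, ".allowed")
		for _, line := range strings.Split(value, "\n") {
			parts := strings.SplitN(strings.TrimSpace(line), "=", 2)
			if len(parts) != 2 || parts[0] == "" {
				continue // skip blank or malformed lines
			}
			allowed[allowKey{APIGroup: group, Kind: parts[0], Name: parts[1]}] = true
		}
	}
	return allowed
}

func main() {
	data := map[string]string{
		"v1.allowed":                "runtimeclass=microvm\nruntimeclass=kata\nstorageclass=local-storage",
		"scheduling.k8s.io.allowed": "priorityclasses=p1\npriorityclasses=p2",
	}
	allowed := ParseAllowedResources(data)
	fmt.Println(allowed[allowKey{"v1", "runtimeclass", "kata"}])                 // true
	fmt.Println(allowed[allowKey{"scheduling.k8s.io", "priorityclasses", "p3"}]) // false
}
```

The syncer would then check this set before propagating a super cluster object into the tenant's view.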

Proposal 2: Create CRD for Resource Syncing Policy

  1. Create a new CRD for the resource syncing policy.
  2. The super cluster admin creates a CR for each tenant to provision that tenant's sync policy.
  3. The CR is deployed in the virtual cluster control plane.
  4. The syncer reads the CR and performs syncing based on the tenant's policies.
```go
type SyncerPolicy struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Rules []SyncRule `json:"rules"`
}

type SyncRule struct {
	// Verbs is a list of Verbs that apply to ALL the ResourceKinds contained
	// in this rule. '*' represents all verbs.
	// Currently only Deny or Allow are used.
	Verbs []string `json:"verbs" protobuf:"bytes,1,rep,name=verbs"`

	// APIGroups is the name of the APIGroup that contains the resources.
	// If multiple API groups are specified, any action requested against one of
	// the enumerated resources in any API group will be allowed.
	// +optional
	APIGroups []string `json:"apiGroups,omitempty" protobuf:"bytes,2,rep,name=apiGroups"`

	// Resources is a list of resources this rule applies to. '*' represents all resources.
	// +optional
	Resources []string `json:"resources,omitempty" protobuf:"bytes,3,rep,name=resources"`
}
```
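The syncer could evaluate these rules along the lines below. This is a sketch only: the `Allows` helper is hypothetical, and it assumes the first rule matching the group and resource decides, with `*` acting as a wildcard and unmatched resources denied by default:

```go
package main

import "fmt"

// SyncRule mirrors the proposed CRD rule: Verbs holds "Allow" or "Deny",
// and APIGroups/Resources may contain "*" as a wildcard.
type SyncRule struct {
	Verbs     []string
	APIGroups []string
	Resources []string
}

// contains reports whether list holds s or the "*" wildcard.
func contains(list []string, s string) bool {
	for _, v := range list {
		if v == "*" || v == s {
			return true
		}
	}
	return false
}

// Allows reports whether syncing the given group/resource is permitted.
// The first rule matching both the group and the resource decides;
// resources not matched by any rule are denied by default.
func Allows(rules []SyncRule, apiGroup, resource string) bool {
	for _, r := range rules {
		if contains(r.APIGroups, apiGroup) && contains(r.Resources, resource) {
			return contains(r.Verbs, "Allow")
		}
	}
	return false
}

func main() {
	rules := []SyncRule{
		{Verbs: []string{"Allow"}, APIGroups: []string{"scheduling.k8s.io"}, Resources: []string{"priorityclasses"}},
		{Verbs: []string{"Deny"}, APIGroups: []string{"*"}, Resources: []string{"*"}},
	}
	fmt.Println(Allows(rules, "scheduling.k8s.io", "priorityclasses")) // true
	fmt.Println(Allows(rules, "v1", "storageclasses"))                 // false
}
```

A deny-by-default trailing rule, as in the example, keeps the tenant's visible surface explicit.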

Policy Handling

Option 1: Direct Access

By using the same name as the virtual cluster, the resource syncer can access the policy (in the form of a CR or ConfigMap) directly. The ConfigMap or sync policy needs to be provisioned in the same namespace as the virtual cluster.

Option 2: Bind Policy to Virtualcluster CR

In this approach, a new attribute, ClusterSyncPolicy, will be added to VirtualClusterSpec to specify a predefined sync policy name or ConfigMap name. The ConfigMap or sync policy needs to be provisioned in the same namespace as the virtual cluster.

```go
type VirtualClusterSpec struct {
	...
	ClusterSyncPolicy string
	...
}
```
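A VirtualCluster object might then reference the policy by name. The fragment below is illustrative only: the `clusterSyncPolicy` field follows the proposed Go attribute and is not an existing API field, and the policy object name is made up:

```yaml
apiVersion: tenancy.x-k8s.io/v1alpha1
kind: VirtualCluster
metadata:
  name: vc-sample-1
spec:
  clusterSyncPolicy: vc-sample-1-sync-policy
```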

A cache will be created in each virtual cluster domain to facilitate policy access from the syncer.
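Such a per-cluster cache could be a simple keyed store guarded by a read-write lock, so the syncer does not re-read the ConfigMap or CR on every sync event. This is an illustrative sketch; `PolicyCache` is not an existing type in the codebase:

```go
package main

import (
	"fmt"
	"sync"
)

// PolicyCache holds the resolved sync policy per virtual cluster.
// Reads are lock-free relative to each other; writes are exclusive.
type PolicyCache struct {
	mu       sync.RWMutex
	policies map[string][]string // virtual cluster name -> allowed entries
}

func NewPolicyCache() *PolicyCache {
	return &PolicyCache{policies: map[string][]string{}}
}

// Set replaces the cached policy for a virtual cluster (called when the
// policy ConfigMap or CR is created or updated).
func (c *PolicyCache) Set(vc string, allowed []string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.policies[vc] = allowed
}

// Get returns the cached policy and whether one exists for the cluster.
func (c *PolicyCache) Get(vc string) ([]string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	p, ok := c.policies[vc]
	return p, ok
}

func main() {
	cache := NewPolicyCache()
	cache.Set("vc-sample-1", []string{"v1/runtimeclass=kata"})
	p, ok := cache.Get("vc-sample-1")
	fmt.Println(ok, p) // true [v1/runtimeclass=kata]
}
```

In practice the cache would be populated by a watch on the policy objects, so updates reach the syncer without polling.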

weiling61 commented 2 years ago

@christopherhein @Fei-Guo

Fei-Guo commented 2 years ago

I am not fully convinced of the need to make the syncing of super cluster resources per-tenant. We can discuss it in the community meeting.

christopherhein commented 2 years ago

An update: we discussed this a handful of weeks ago, and the consensus was that this makes sense in the case where you want to expose "platform/super cluster" features to specific tenants.

Implementation-wise:

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 2 years ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-nested/issues/265#issuecomment-1262774911):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.