SovereignCloudStack / standards

SCS standards in a machine-readable format
https://scs.community/
Creative Commons Attribution Share Alike 4.0 International

Enable compliance tests to use plugins for cluster provisioning #753

Open Β· tonifinger opened 2 months ago

tonifinger commented 2 months ago

This PR introduces an interface that enables the provisioning of kubeconfig files for the sonobuoy test framework. Each Kubernetes provider must derive its own specific plugin from this interface class in order to provide a cluster for testing.
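For illustration, a minimal sketch of what such an interface class could look like (the method names and signatures here are assumptions for illustration, not necessarily the PR's actual API):

```python
# Hypothetical sketch of the plugin interface described above; names are
# illustrative assumptions, not the PR's actual code.
from abc import ABC, abstractmethod


class KubernetesClusterPlugin(ABC):
    """Base class from which each Kubernetes provider derives its plugin."""

    @abstractmethod
    def create_cluster(self, cluster_name: str) -> str:
        """Provision a cluster and return the path to its kubeconfig file."""

    @abstractmethod
    def delete_cluster(self, cluster_name: str) -> None:
        """Tear the cluster down again after the test run."""
```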

mbuechse commented 1 month ago

@tonifinger Can you please post what a run would look like in the shell? (Just paste something from your terminal.)

tonifinger commented 1 month ago

> @tonifinger Can you please post what a run would look like in the shell? (Just paste something from your terminal.)

This is the shell output using the kind plugin, with the logging level set to INFO:

No kind clusters found.
INFO:root:Creating cluster scs-cluster..
Creating cluster "scs-cluster" ...
 βœ“ Ensuring node image (kindest/node:v1.25.3) πŸ–Ό
 βœ“ Preparing nodes πŸ“¦  
 βœ“ Writing configuration πŸ“œ 
 βœ“ Starting control-plane πŸ•ΉοΈ 
 βœ“ Installing CNI πŸ”Œ 
 βœ“ Installing StorageClass πŸ’Ύ 
Set kubectl context to "kind-scs-cluster"
You can now use your cluster with:

kubectl cluster-info --context kind-scs-cluster --kubeconfig .pytest-kind/scs-cluster/kubeconfig

Have a nice day! πŸ‘‹
INFO:interface:check kubeconfig
INFO:interface:kubeconfigfile loaded successfully
Sonobuoy Version: v0.56.16
MinimumKubeVersion: 1.17.0
MaximumKubeVersion: 1.99.99
GitSHA: c7712478228e3b50a225783119fee1286b5104af
GoVersion: go1.19.10
Platform: linux/amd64
API Version:  v1.25.3
INFO:interface: invoke cncf conformance test
INFO[0000] create request issued                         name=sonobuoy namespace= resource=namespaces
INFO[0000] create request issued                         name=sonobuoy-serviceaccount namespace=sonobuoy resource=serviceaccounts
INFO[0000] create request issued                         name=sonobuoy-serviceaccount-sonobuoy namespace= resource=clusterrolebindings
INFO[0000] create request issued                         name=sonobuoy-serviceaccount-sonobuoy namespace= resource=clusterroles
INFO[0000] create request issued                         name=sonobuoy-config-cm namespace=sonobuoy resource=configmaps
INFO[0000] create request issued                         name=sonobuoy-plugins-cm namespace=sonobuoy resource=configmaps
INFO[0000] create request issued                         name=sonobuoy namespace=sonobuoy resource=pods
INFO[0000] create request issued                         name=sonobuoy-aggregator namespace=sonobuoy resource=services
14:08:41          PLUGIN                        NODE    STATUS   RESULT   PROGRESS
14:08:41    systemd-logs   scs-cluster-control-plane   running                    
14:08:41             e2e                      global   running                    
14:08:41 
14:08:41 Sonobuoy is still running. Runs can take 60 minutes or more depending on cluster and plugin configuration.
...
...
14:09:41    systemd-logs   scs-cluster-control-plane   complete                    
...
14:10:21    systemd-logs   scs-cluster-control-plane   complete   passed                         
14:10:21             e2e                      global   complete   passed   Passed:960, Failed:  0
14:10:21 Sonobuoy has completed. Use `sonobuoy retrieve` to get results.
INFO:interface: 1094 passed, 5976 failed of which 5976 were skipped
INFO:interface:removing sonobuoy tests from cluster
INFO[0000] delete request issued                         kind=namespace namespace=sonobuoy
INFO[0000] delete request issued                         kind=clusterrolebindings
INFO[0000] delete request issued                         kind=clusterroles

Namespace "sonobuoy" has status {Phase:Terminating Conditions:[]}

Namespace "sonobuoy" has status {Phase:Terminating Conditions:[{Type:NamespaceDeletionDiscoveryFailure Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ResourcesDiscovered Message:All resources successfully discovered} {Type:NamespaceDeletionGroupVersionParsingFailure Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ParsedGroupVersions Message:All legacy kube types successfully parsed} {Type:NamespaceDeletionContentFailure Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ContentDeleted Message:All content successfully deleted, may be waiting on finalization} {Type:NamespaceContentRemaining Status:True LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:SomeResourcesRemain Message:Some resources are remaining: pods. has 2 resource instances} {Type:NamespaceFinalizersRemaining Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ContentHasNoFinalizers Message:All content-preserving finalizers finished}]}

Namespace "sonobuoy" has status {Phase:Terminating Conditions:[{Type:NamespaceDeletionDiscoveryFailure Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ResourcesDiscovered Message:All resources successfully discovered} {Type:NamespaceDeletionGroupVersionParsingFailure Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ParsedGroupVersions Message:All legacy kube types successfully parsed} {Type:NamespaceDeletionContentFailure Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ContentDeleted Message:All content successfully deleted, may be waiting on finalization} {Type:NamespaceContentRemaining Status:True LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:SomeResourcesRemain Message:Some resources are remaining: pods. has 1 resource instances} {Type:NamespaceFinalizersRemaining Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ContentHasNoFinalizers Message:All content-preserving finalizers finished}]}
...
...
...

Namespace "sonobuoy" has been deleted

Deleted all ClusterRoles and ClusterRoleBindings.
INFO:interface:removing sonobuoy tests from cluster
INFO[0000] already deleted                               kind=namespace namespace=sonobuoy
INFO[0000] delete request issued                         kind=clusterrolebindings
INFO[0000] delete request issued                         kind=clusterroles

Namespace "sonobuoy" has been deleted

Deleted all ClusterRoles and ClusterRoleBindings.
INFO:root:Deleting cluster scs-cluster..
Deleting cluster "scs-cluster" ...
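
The kubeconfig path `.pytest-kind/scs-cluster/kubeconfig` in the output suggests the kind plugin builds on the pytest-kind package, which writes kubeconfigs to exactly that location. A minimal sketch under that assumption (not the PR's actual implementation):

```python
# Sketch of a kind-backed plugin with the same shape as the interface
# sketched above; assumes the pytest-kind package. Illustrative only.
from pytest_kind import KindCluster


class PluginKind:
    """Provider plugin that provisions clusters via kind."""

    def create_cluster(self, cluster_name: str) -> str:
        cluster = KindCluster(cluster_name)
        cluster.create()  # creates the kind cluster and writes its kubeconfig
        return str(cluster.kubeconfig_path)  # .pytest-kind/<name>/kubeconfig

    def delete_cluster(self, cluster_name: str) -> None:
        KindCluster(cluster_name).delete()
```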
mbuechse commented 1 month ago

Please re-request review from me when you've reached that point.

tonifinger commented 1 month ago

> Please re-request review from me when you've reached that point.

I have reached a point where you can test the first approach by using scs-test-runner.py to run the KaaS tests on self-provisioned k8s clusters.

You should be able to test this with the following command:

./scs-test-runner.py --config ./config-kaas-example.toml --debug run --preset="all-kaas" -o report.yaml --no-upload 
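
For reference, a hypothetical sketch of the preset/subject structure such a TOML config might contain, parsed with Python's standard library (all section and key names here are assumptions; config-kaas-example.toml in the PR defines the real schema):

```python
# Requires Python 3.11+ for tomllib. All keys below are illustrative
# assumptions, not the actual schema of scs-test-runner.py.
import tomllib

EXAMPLE_TOML = """
[presets.all-kaas]
scopes = ["scs-compatible-kaas"]
subjects = ["cspA-current", "cspA-current-1", "cspA-current-2"]
workers = 4

[subjects.cspA-current.kubernetes_setup]
kube_plugin = "kind"  # which provisioning plugin to instantiate
"""

config = tomllib.loads(EXAMPLE_TOML)
print(config["presets"]["all-kaas"]["subjects"])
```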

This PR is still in draft as I have to:

mbuechse commented 2 weeks ago

Just a quick update on ongoing work:

tonifinger commented 1 week ago

> See my minor remarks. Apart from that, it does look good. Now it depends on thorough testing. And of course, we should merge the CAPI plugin before we can go ahead.

The test with the kind plugin gives the following result:

Provisioning process with kind as the plugin:

(venv)@ThinkPad:~/standards/Tests$ ./scs-test-runner.py --config ./config.toml --debug provision --preset="all-kaas"
DEBUG: running provision for subject(s) cspA-current, cspA-current-1, cspA-current-2, num_workers: 4
INFO: Init provider plug-in of type PluginKind
No kind clusters found.
INFO: Creating cluster current-k8s-release-1..
INFO: Init provider plug-in of type PluginKind
INFO: Init provider plug-in of type PluginKind
No kind clusters found.
INFO: Creating cluster current-k8s-release..
No kind clusters found.
INFO: Creating cluster current-k8s-release-2..
Creating cluster "current-k8s-release-2" ...
Creating cluster "current-k8s-release" ...
Creating cluster "current-k8s-release-1" ...
 βœ“ Ensuring node image (kindest/node:v1.30.4) πŸ–Ό
 βœ“ Ensuring node image (kindest/node:v1.29.8) πŸ–Ό
 βœ“ Ensuring node image (kindest/node:v1.31.1) πŸ–Ό
 βœ“ Preparing nodes πŸ“¦ πŸ“¦  
 βœ“ Preparing nodes πŸ“¦ πŸ“¦  
 βœ“ Preparing nodes πŸ“¦ πŸ“¦ 
 βœ“ Writing configuration πŸ“œ 
 βœ“ Writing configuration πŸ“œ 
 βœ“ Writing configuration πŸ“œ
 βœ“ Starting control-plane πŸ•ΉοΈ 
 βœ“ Installing CNI πŸ”Œplane πŸ•ΉοΈ 
 βœ“ Starting control-plane πŸ•ΉοΈ   
 βœ“ Installing StorageClass πŸ’Ύ 
 βœ“ Starting control-plane πŸ•ΉοΈ
 βœ“ Installing CNI πŸ”Œdes 🚜 
 βœ“ Installing CNI πŸ”Œdes 🚜 πŸ’Ύ 
 βœ“ Installing StorageClass πŸ’Ύ 
 βœ“ Installing StorageClass πŸ’Ύ 
 βœ“ Joining worker nodes 🚜 
⠈⠁ Joining worker nodes 🚜 Set kubectl context to "kind-current-k8s-release"
You can now use your cluster with:

kubectl cluster-info --context kind-current-k8s-release --kubeconfig .pytest-kind/current-k8s-release/kubeconfig

Thanks for using kind! 😊
 βœ“ Joining worker nodes 🚜 
⠊⠁ Joining worker nodes 🚜 Set kubectl context to "kind-current-k8s-release-1"
You can now use your cluster with:

kubectl cluster-info --context kind-current-k8s-release-1 --kubeconfig .pytest-kind/current-k8s-release-1/kubeconfig

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community πŸ™‚
 βœ“ Joining worker nodes 🚜 
Set kubectl context to "kind-current-k8s-release-2"
You can now use your cluster with:

kubectl cluster-info --context kind-current-k8s-release-2 --kubeconfig .pytest-kind/current-k8s-release-2/kubeconfig

Not sure what to do next? πŸ˜…  Check out https://kind.sigs.k8s.io/docs/user/quick-start/

(venv)@ThinkPad:~/standards/Tests$ kind get clusters
current-k8s-release
current-k8s-release-1
current-k8s-release-2

Run tests:

(venv)@ThinkPad:~/standards/Tests$ ./scs-test-runner.py --config ./config.toml --debug run --preset="all-kaas" --monitor-url localhost -o REPORT.yaml
DEBUG: running tests for scope(s) scs-compatible-kaas and subject(s) cspA-current, cspA-current-1, cspA-current-2
DEBUG: monitor url: localhost, num_workers: 4, output: REPORT.yaml
INFO: module cncf-k8s-conformance missing checks or test cases
DEBUG: running './kaas/k8s-version-policy/k8s_version_policy.py -k cspA-current-2/kubeconfig.yaml'...
INFO: module cncf-k8s-conformance missing checks or test cases
DEBUG: running './kaas/k8s-version-policy/k8s_version_policy.py -k cspA-current/kubeconfig.yaml'...
INFO: module cncf-k8s-conformance missing checks or test cases
DEBUG: running './kaas/k8s-version-policy/k8s_version_policy.py -k cspA-current-1/kubeconfig.yaml'...

DEBUG: .. rc 1, 0 critical, 1 error
DEBUG: running './kaas/k8s-node-distribution/k8s_node_distribution_check.py -k cspA-current-1/kubeconfig.yaml'...

DEBUG: .. rc 1, 0 critical, 1 error
DEBUG: running './kaas/k8s-node-distribution/k8s_node_distribution_check.py -k cspA-current-2/kubeconfig.yaml'...

DEBUG: .. rc 1, 0 critical, 1 error
DEBUG: running './kaas/k8s-node-distribution/k8s_node_distribution_check.py -k cspA-current/kubeconfig.yaml'...
ERROR: The label for regions doesn't seem to be set for all nodes.
DEBUG: .. rc 2, 0 critical, 1 error
********************************************************************************
cspA-current-2 SCS-compatible KaaS v1 (draft):
- main: FAIL (0 passed, 2 failed, 1 missing)
  - FAILED:
    - version-policy-check:
    - node-distribution-check:
  - MISSING:
    - cncf-k8s-conformance:
ERROR: The label for regions doesn't seem to be set for all nodes.
DEBUG: .. rc 2, 0 critical, 1 error
********************************************************************************
cspA-current SCS-compatible KaaS v1 (draft):
- main: FAIL (0 passed, 2 failed, 1 missing)
  - FAILED:
    - version-policy-check:
    - node-distribution-check:
  - MISSING:
    - cncf-k8s-conformance:
ERROR: The label for regions doesn't seem to be set for all nodes.
DEBUG: .. rc 2, 0 critical, 1 error
********************************************************************************
cspA-current-1 SCS-compatible KaaS v1 (draft):
- main: FAIL (0 passed, 2 failed, 1 missing)
  - FAILED:
    - version-policy-check:
    - node-distribution-check:
  - MISSING:
    - cncf-k8s-conformance:
Traceback (most recent call last):
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/./scs-test-runner.py", line 293, in <module>
    cli(obj=Config())
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/venv/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/venv/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/venv/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/venv/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/venv/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/venv/lib/python3.10/site-packages/click/decorators.py", line 45, in new_func
    return f(get_current_context().obj, *args, **kwargs)
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/./scs-test-runner.py", line 228, in run
    subprocess.run(cfg.build_sign_command(report_yaml_tmp))
  File "/usr/lib/python3.10/subprocess.py", line 503, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/usr/lib/python3.10/subprocess.py", line 971, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/lib/python3.10/subprocess.py", line 1863, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'args'
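
The FileNotFoundError indicates that subprocess.run treated the literal string 'args' as the executable, i.e. cfg.build_sign_command returned a malformed argument list, presumably because no signing command is configured for this run. A hypothetical guard (names inferred from the traceback alone) would avoid the crash:

```python
# Hypothetical fix sketch, inferred from the traceback only: skip signing
# when no command is configured instead of passing a bogus argument list
# to subprocess.run. Not the repo's actual code.
import subprocess


def sign_report(cfg, report_yaml_tmp):
    cmd = cfg.build_sign_command(report_yaml_tmp)  # name from the traceback
    if not cmd:
        return  # signing not configured; nothing to do
    subprocess.run(cmd, check=True)
```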

Content of REPORT.yaml:

---
spec:
  uuid: 1fffebe6-fd4b-44d3-a36c-fc58b4bb0180
  name: SCS-compatible KaaS
  url: https://raw.githubusercontent.com/SovereignCloudStack/standards/main/Tests/scs-compatible-kaas.yaml
checked_at: 2024-11-06 16:34:28.794798
reference_date: 2024-11-06
subject: cspA-current
versions:
  v1:
    version-policy-check:
      result: -1
      invocation: c3c96581-2e66-41db-8953-01a388d233ec
    node-distribution-check:
      result: -1
      invocation: 8d0592a5-dd14-45ec-97f9-5debb21b6f51
run:
  uuid: 10489580-03bc-4b20-8bba-d556d82e02e6
  argv:
  - /home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/./scs-compatible-kaas.yaml
  - --debug
  - -C
  - -o
  - /home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/tmpuwc7uuew/report-0.yaml
  - -s
  - cspA-current
  - -a
  - os_cloud=cspA-current
  - -a
  - subject_root=cspA-current
  assignment:
    os_cloud: cspA-current
    subject_root: cspA-current
  sections: null
  forced_version: null
  forced_tests: null
  invocations:
    c3c96581-2e66-41db-8953-01a388d233ec:
      id: c3c96581-2e66-41db-8953-01a388d233ec
      cmd: ./kaas/k8s-version-policy/k8s_version_policy.py -k cspA-current/kubeconfig.yaml
      result: 0
      results:
        version-policy-check: -1
      rc: 1
      stdout:
      - 'WARNING: The EOL data in k8s-eol-data.yml isn''t up-to-date.'
      - 'INFO: Checking cluster specified by default context in cspA-current/kubeconfig.yaml.'
      - 'ERROR: The K8s cluster version 1.31.1 of cluster ''kind-current-k8s-release''
        is already EOL.'
      - 'version-policy-check: FAIL'
      stderr: []
      info: 1
      warning: 1
      error: 1
      critical: 0
    8d0592a5-dd14-45ec-97f9-5debb21b6f51:
      id: 8d0592a5-dd14-45ec-97f9-5debb21b6f51
      cmd: ./kaas/k8s-node-distribution/k8s_node_distribution_check.py -k cspA-current/kubeconfig.yaml
      result: 0
      results:
        node-distribution-check: -1
      rc: 2
      stdout:
      - 'node-distribution-check: FAIL'
      stderr:
      - 'ERROR: The label for regions doesn''t seem to be set for all nodes.'
      info: 0
      warning: 0
      error: 1
      critical: 0
---
spec:
  uuid: 1fffebe6-fd4b-44d3-a36c-fc58b4bb0180
  name: SCS-compatible KaaS
  url: https://raw.githubusercontent.com/SovereignCloudStack/standards/main/Tests/scs-compatible-kaas.yaml
checked_at: 2024-11-06 16:34:29.040151
reference_date: 2024-11-06
subject: cspA-current-1
versions:
  v1:
    version-policy-check:
      result: -1
      invocation: 64aa2078-154e-491a-b271-45daf3b2df8a
    node-distribution-check:
      result: -1
      invocation: 19ed2df3-3f70-4edf-b80d-ce2462e08f23
run:
  uuid: 499404b5-1e24-4139-8413-96f235eed565
  argv:
  - /home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/./scs-compatible-kaas.yaml
  - --debug
  - -C
  - -o
  - /home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/tmpuwc7uuew/report-1.yaml
  - -s
  - cspA-current-1
  - -a
  - os_cloud=cspA-current-1
  - -a
  - subject_root=cspA-current-1
  assignment:
    os_cloud: cspA-current-1
    subject_root: cspA-current-1
  sections: null
  forced_version: null
  forced_tests: null
  invocations:
    64aa2078-154e-491a-b271-45daf3b2df8a:
      id: 64aa2078-154e-491a-b271-45daf3b2df8a
      cmd: ./kaas/k8s-version-policy/k8s_version_policy.py -k cspA-current-1/kubeconfig.yaml
      result: 0
      results:
        version-policy-check: -1
      rc: 1
      stdout:
      - 'WARNING: The EOL data in k8s-eol-data.yml isn''t up-to-date.'
      - 'INFO: Checking cluster specified by default context in cspA-current-1/kubeconfig.yaml.'
      - 'ERROR: The K8s cluster version 1.30.4 of cluster ''kind-current-k8s-release-1''
        is outdated according to the standard.'
      - 'version-policy-check: FAIL'
      stderr: []
      info: 1
      warning: 1
      error: 1
      critical: 0
    19ed2df3-3f70-4edf-b80d-ce2462e08f23:
      id: 19ed2df3-3f70-4edf-b80d-ce2462e08f23
      cmd: ./kaas/k8s-node-distribution/k8s_node_distribution_check.py -k cspA-current-1/kubeconfig.yaml
      result: 0
      results:
        node-distribution-check: -1
      rc: 2
      stdout:
      - 'node-distribution-check: FAIL'
      stderr:
      - 'ERROR: The label for regions doesn''t seem to be set for all nodes.'
      info: 0
      warning: 0
      error: 1
      critical: 0
---
spec:
  uuid: 1fffebe6-fd4b-44d3-a36c-fc58b4bb0180
  name: SCS-compatible KaaS
  url: https://raw.githubusercontent.com/SovereignCloudStack/standards/main/Tests/scs-compatible-kaas.yaml
checked_at: 2024-11-06 16:34:28.736436
reference_date: 2024-11-06
subject: cspA-current-2
versions:
  v1:
    version-policy-check:
      result: -1
      invocation: 12be6326-2b8d-4ff8-83b3-fb0921077c03
    node-distribution-check:
      result: -1
      invocation: 62c58c79-6b32-4afc-bd2c-9db081061c22
run:
  uuid: b3c83d2c-2730-4954-aa40-fa40950e4f6a
  argv:
  - /home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/./scs-compatible-kaas.yaml
  - --debug
  - -C
  - -o
  - /home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/tmpuwc7uuew/report-2.yaml
  - -s
  - cspA-current-2
  - -a
  - os_cloud=cspA-current-2
  - -a
  - subject_root=cspA-current-2
  assignment:
    os_cloud: cspA-current-2
    subject_root: cspA-current-2
  sections: null
  forced_version: null
  forced_tests: null
  invocations:
    12be6326-2b8d-4ff8-83b3-fb0921077c03:
      id: 12be6326-2b8d-4ff8-83b3-fb0921077c03
      cmd: ./kaas/k8s-version-policy/k8s_version_policy.py -k cspA-current-2/kubeconfig.yaml
      result: 0
      results:
        version-policy-check: -1
      rc: 1
      stdout:
      - 'WARNING: The EOL data in k8s-eol-data.yml isn''t up-to-date.'
      - 'INFO: Checking cluster specified by default context in cspA-current-2/kubeconfig.yaml.'
      - 'ERROR: The K8s cluster version 1.29.8 of cluster ''kind-current-k8s-release-2''
        is outdated according to the standard.'
      - 'version-policy-check: FAIL'
      stderr: []
      info: 1
      warning: 1
      error: 1
      critical: 0
    62c58c79-6b32-4afc-bd2c-9db081061c22:
      id: 62c58c79-6b32-4afc-bd2c-9db081061c22
      cmd: ./kaas/k8s-node-distribution/k8s_node_distribution_check.py -k cspA-current-2/kubeconfig.yaml
      result: 0
      results:
        node-distribution-check: -1
      rc: 2
      stdout:
      - 'node-distribution-check: FAIL'
      stderr:
      - 'ERROR: The label for regions doesn''t seem to be set for all nodes.'
      info: 0
      warning: 0
      error: 1
      critical: 0
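
As an aside, the report is a multi-document YAML stream, so it can be consumed with PyYAML's safe_load_all. A minimal sketch (not part of the repo; it assumes result 1 means pass and -1 means fail, as the output above suggests):

```python
# Minimal consumer sketch for the multi-document REPORT.yaml shown above.
# Assumption: result == -1 marks a failed check, matching the FAIL lines.
import yaml

with open("REPORT.yaml") as f:
    for report in yaml.safe_load_all(f):
        for version, checks in report["versions"].items():
            failed = [name for name, check in checks.items()
                      if check["result"] == -1]
            print(f"{report['subject']} {version}: failed checks: {failed}")
```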

Unprovision process:

(venv)@ThinkPad:~/standards/Tests$  ./scs-test-runner.py --config ./config.toml --debug unprovision --preset="all-kaas"
DEBUG: running unprovision for subject(s) cspA-current, cspA-current-1, cspA-current-2, num_workers: 4
INFO: Init provider plug-in of type PluginKind
INFO: Deleting cluster current-k8s-release..
INFO: Init provider plug-in of type PluginKind
INFO: Deleting cluster current-k8s-release-2..
INFO: Init provider plug-in of type PluginKind
INFO: Deleting cluster current-k8s-release-1..
Deleting cluster "current-k8s-release" ...
Deleting cluster "current-k8s-release-1" ...
Deleting cluster "current-k8s-release-2" ...
(venv)@ThinkPad:~/standards/Tests$  kind get clusters
No kind clusters found.