Closed: lixuna closed this issue 2 years ago
Estimate of effort:
Running on a vanilla k3s install.
First issue: running cnf_setup crashed with a version-parsing error, although it still installed the coredns pod:
root@cnfqa1:~/v0.28.1# ./cnf-testsuite cnf_setup cnf-config=./cnf-testsuite.yml
Successfully created directories for cnf-testsuite
cnf-testsuite namespace already exists on the Kubernetes cluster
cnf setup online mode
Successfully setup coredns
/usr/share/crystal/src/regex/match_data.cr:149:7 in 'parse'
/workspace/src/tasks/utils/utils.cr:388:8 in 'version_less_than'
/workspace/src/tasks/utils/cnf_manager.cr:984:8 in 'sample_setup'
/workspace/src/tasks/cnf_setup.cr:37:5 in '->'
/workspace/lib/sam/src/sam/execution.cr:20:7 in '__crystal_main'
/usr/share/crystal/src/crystal/main.cr:110:5 in 'main'
src/env/__libc_start_main.c:94:2 in 'libc_start_main_stage2'
Not a semantic version: "1.22.7k3s1"
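For context on the crash above: version_less_than in src/tasks/utils/utils.cr parses the server version with a strict semantic-version pattern, and the k3s build suffix (k3s normally reports something like "v1.22.7+k3s1") doesn't fit it. A minimal Python sketch of the failure mode and one possible tolerant parse — the regex and helper here are illustrative stand-ins, not the actual Crystal code:

```python
import re

# Strict SemVer-style pattern (illustrative): rejects "1.22.7k3s1"
# because the distro suffix is fused onto the patch number.
SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def version_less_than(a: str, b: str) -> bool:
    """Hypothetical tolerant comparison: take the leading X.Y.Z and
    ignore any trailing distro/build suffix (e.g. "k3s1", "-rc1")."""
    def parse(v: str):
        v = v.lstrip("v")
        m = re.match(r"^(\d+)\.(\d+)\.(\d+)", v)
        if m is None:
            raise ValueError(f"Not a semantic version: {v!r}")
        return tuple(int(x) for x in m.groups())
    return parse(a) < parse(b)

print(SEMVER.match("1.22.7k3s1"))                    # None -> strict parse fails
print(version_less_than("1.22.7k3s1", "1.23.0"))     # True
```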
Next error on k3s: during the workload run, the microservice category crashed in what appears to be the reasonable_startup_time test:
✔️ PASSED: Image size is good
E, [2022-04-22 23:15:57 +00:00 #80202] ERROR -- cnf-testsuite: Missing hash key: "data"
E, [2022-04-22 23:15:57 +00:00 #80202] ERROR -- cnf-testsuite: /usr/share/crystal/src/hash.cr:1034:9 in '[]'
E, [2022-04-22 23:15:57 +00:00 #80202] ERROR -- cnf-testsuite: /workspace/src/tasks/workload/microservice.cr:209:20 in '->'
E, [2022-04-22 23:15:57 +00:00 #80202] ERROR -- cnf-testsuite: /usr/share/crystal/src/primitives.cr:266:3 in 'all_cnfs_task_runner'
E, [2022-04-22 23:15:57 +00:00 #80202] ERROR -- cnf-testsuite: /workspace/src/tasks/utils/task.cr:19:9 in 'task_runner'
E, [2022-04-22 23:15:57 +00:00 #80202] ERROR -- cnf-testsuite: /workspace/src/tasks/workload/microservice.cr:192:3 in '->'
E, [2022-04-22 23:15:57 +00:00 #80202] ERROR -- cnf-testsuite: /workspace/lib/sam/src/sam/task.cr:44:29 in 'call'
E, [2022-04-22 23:15:57 +00:00 #80202] ERROR -- cnf-testsuite: /workspace/lib/sam/src/sam/task.cr:44:29 in 'call'
E, [2022-04-22 23:15:57 +00:00 #80202] ERROR -- cnf-testsuite: /workspace/lib/sam/src/sam/execution.cr:20:7 in '__crystal_main'
E, [2022-04-22 23:15:57 +00:00 #80202] ERROR -- cnf-testsuite: /usr/share/crystal/src/crystal/main.cr:110:5 in 'main'
E, [2022-04-22 23:15:57 +00:00 #80202] ERROR -- cnf-testsuite: src/env/__libc_start_main.c:94:2 in 'libc_start_main_stage2'
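For what it's worth, the 'Missing hash key: "data"' error is a strict Hash#[] lookup at microservice.cr:209 hitting a resource that lacks a "data" field. A Python sketch of the failure mode and the defensive lookup (an illustrative stand-in, not the Crystal source):

```python
# Stand-in resource lacking the "data" key the test apparently expects.
resource = {"kind": "Deployment", "metadata": {"name": "coredns-coredns"}}

try:
    value = resource["data"]        # strict lookup -> raises on absence
except KeyError as e:
    print(f"Missing hash key: {e}")

# Defensive variant (Crystal's equivalent is Hash#[]?, returning nil):
startup_probe = resource.get("data")
print(startup_probe)                # None -> caller can skip, not crash
```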
✔️ PASSED: Only one process type used
Deferred
Platforms that saw no major errors or issues:
Kind results for comparisons:
pair@cnfdev03:~/workspace/drew/cnf-testsuite$ ./cnf-testsuite cert
Compatibility, Installability & Upgradability Tests
Successfully created directories for cnf-testsuite
✔️ PASSED: Helm Chart exported_chart Lint Passed
✔️ PASSED: Published Helm Chart Found
⏭️ SKIPPED: Helm Deploy
Global docker found. Version: 0.10.1
No Local docker version found
✔️ PASSED: CNF compatible with both Calico and Cilium
✔️ PASSED: Replicas increased to 3 and decreased to 1
Rolling update failed on resource: coredns-coredns and container: coredns
✖️ FAILED: CNF Rollback Failed
Compatibility, installability, and upgradeability results: 115 of 11 tests passed
State Tests
✔️ PASSED: hostPath volumes not found
Rescued: On resource coredns-coredns of kind Service, local storage configuration volumes not found
✔️ PASSED: local storage configuration volumes not found
✖️ FAILED: Volumes used are not elastic volumes
⏭️ SKIPPED: Mysql not installed
✔️ PASSED: node_drain chaos test passed
State results: 110 of 5 tests passed
Security Tests
✔️ PASSED: No privileged containers
⏭️ SKIPPED: Skipping non_root_user: Falco failed to install. Check Kernel Headers are installed on the Host Systems (K8s).
✔️ PASSED: No containers allow a symlink attack
✖️ FAILED: Found containers that allow privilege escalation
container :coredns in Deployment: coredns-coredns allow privilege escalation
Remediation: If your application does not need it, make sure the allowPrivilegeEscalation field of the securityContext is set to false.
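For reference, the remediation above corresponds to a container-level securityContext along these lines (a sketch against the standard Kubernetes API; the coredns chart may expose this through its own values):

```yaml
# Illustrative fragment of a Deployment pod template
spec:
  template:
    spec:
      containers:
        - name: coredns
          securityContext:
            allowPrivilegeEscalation: false
```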
✔️ PASSED: Containers with insecure capabilities were not found
✔️ PASSED: Containers with dangerous capabilities were not found
✖️ FAILED: Found containers without resource limits defined
there are no resource limits defined for container : kubectl-label
Remediation: Define LimitRange and ResourceQuota policies to limit resource usage for namespaces or nodes.
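As a sketch of that remediation, a namespace-wide LimitRange can supply defaults when a container (such as kubectl-label above) defines none; the name and values below are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits        # illustrative name
  namespace: default
spec:
  limits:
    - type: Container
      default:                # limits applied when a container sets none
        cpu: 500m
        memory: 256Mi
      defaultRequest:         # requests applied when a container sets none
        cpu: 100m
        memory: 128Mi
```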
✖️ FAILED: Found resources that do not use security services
Workload: coredns-coredns does not define any linux security hardening
Remediation: You can use AppArmor, Seccomp, SELinux and Linux Capabilities mechanisms to restrict containers abilities to utilize unwanted privileges.
✖️ FAILED: Ingress and Egress traffic not blocked on pods
Deployment: coredns-coredns has Pods which don't have ingress/egress defined
Job: gatekeeper-update-namespace-label has Pods which don't have ingress/egress defined
Remediation: Define a network policy that restricts ingress and egress connections.
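A minimal default-deny NetworkPolicy of the kind the remediation suggests might look like this (name is illustrative; selecting every pod in the namespace blocks both directions until explicit allow rules are added):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all      # illustrative name
  namespace: default
spec:
  podSelector: {}             # selects all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```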
✔️ PASSED: No containers with hostPID and hostIPC privileges
✔️ PASSED: Containers are running with non-root user with non-root group membership
✔️ PASSED: No privileged containers were found
✖️ FAILED: Found namespaces which do not have network policies defined
no policy is defined for namespace default
Remediation: Define network policies or use similar network protection mechanisms.
✖️ FAILED: Found containers with mutable file systems
container :coredns in Deployment: coredns-coredns has mutable filesystem
Remediation: Set the filesystem of the container to read-only when possible (Pod securityContext, readOnlyRootFilesystem: true). If the container's application needs to write into the filesystem, it is recommended to mount secondary filesystems for specific directories where the application requires write access.
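A sketch of that remediation for a pod spec: a read-only root filesystem plus an emptyDir mount for any directory the application must write to (container name and paths are illustrative):

```yaml
containers:
  - name: coredns
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
      - name: tmp             # writable scratch space via emptyDir
        mountPath: /tmp
volumes:
  - name: tmp
    emptyDir: {}
```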
✔️ PASSED: Containers do not have hostPath mounts
✔️ PASSED: Container engine daemon sockets are not mounted as volumes
✔️ PASSED: Services are not using external IPs
✔️ N/A: Pods are not using SELinux
✔️ PASSED: No restricted values found for sysctls
✔️ PASSED: No host network attached to pod
✖️ FAILED: Service accounts automatically mapped
the following service account: default in the following namespace: default mounts service account tokens in pods by default
the following service account: disk-fill-sa in the following namespace: default mounts service account tokens in pods by default
the following service account: node-drain-sa in the following namespace: default mounts service account tokens in pods by default
the following service account: pod-delete-sa in the following namespace: default mounts service account tokens in pods by default
the following service account: pod-io-stress-sa in the following namespace: default mounts service account tokens in pods by default
the following service account: pod-memory-hog-sa in the following namespace: default mounts service account tokens in pods by default
the following service account: pod-network-corruption-sa in the following namespace: default mounts service account tokens in pods by default
the following service account: pod-network-duplication-sa in the following namespace: default mounts service account tokens in pods by default
the following service account: pod-network-latency-sa in the following namespace: default mounts service account tokens in pods by default
Remediation: Disable automatic mounting of service account tokens to Pods either at the service account level or at the individual Pod level, by specifying automountServiceAccountToken: false. Note that the Pod level takes precedence.
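Applied at the service-account level, the remediation above is a single field on the ServiceAccount object (shown here for the default account; the same field also exists on the Pod spec, where it takes precedence):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
automountServiceAccountToken: false
```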
✔️ PASSED: No applications credentials in configuration files
Security results: 445 of 21 tests passed
Configuration Tests
✔️ PASSED: No IP addresses found
✔️ PASSED: NodePort is not used
✔️ PASSED: HostPort is not used
✔️ PASSED: No hard-coded IP addresses found in the runtime K8s configuration
No Secret Volumes or Container secretKeyRefs found for resource: {kind: "Deployment", name: "coredns-coredns", namespace: "default"}
⏭️ SKIPPED: Secrets not used
To address this issue please see the USAGE.md documentation
✖️ FAILED: Found mutable configmap(s)
⏭️ SKIPPED: alpha_k8s_apis
✖️ FAILED: Pods should have the app.kubernetes.io/name label.
Job gatekeeper-update-namespace-label in default namespace failed. validation error: The label `app.kubernetes.io/name` is required. Rule autogen-check-for-labels failed at path /spec/template/metadata/labels/app.kubernetes.io/name/
✔️ PASSED: Container images are not using the latest tag
✖️ FAILED: Resources are created in the default namespace
Pod coredns-coredns-6fc69fdfd7-hkql6 in default namespace failed. validation error: Using 'default' namespace is not allowed. Rule validate-namespace failed at path /metadata/namespace/
Job gatekeeper-update-namespace-label in default namespace failed. validation error: Using 'default' namespace is not allowed for pod controllers. Rule validate-podcontroller-namespace failed at path /metadata/namespace/
Deployment coredns-coredns in default namespace failed. validation error: Using 'default' namespace is not allowed for pod controllers. Rule validate-podcontroller-namespace failed at path /metadata/namespace/
✔️ PASSED: Image uses a versioned tag
Configuration results: 310 of 11 tests passed
Observability and Diagnostics Tests
✔️ PASSED: Resources output logs to stdout and stderr
⏭️ SKIPPED: Prometheus server not found
⏭️ SKIPPED: Prometheus traffic not configured
⏭️ SKIPPED: Fluentd not configured
⏭️ SKIPPED: Jaeger not configured
Observability and diagnostics results: 100 of 5 tests passed
Microservice Tests
✔️ PASSED: Image size is good
✔️ PASSED: CNF had a reasonable startup time
✔️ PASSED: Only one process type used
✔️ PASSED: Some containers exposed as a service
⏭️ SKIPPED: [shared_database] No MariaDB containers were found
Microservice results: 115 of 5 tests passed
Reliability, Resilience, and Availability Tests
✔️ PASSED: pod_network_latency chaos test passed
✔️ PASSED: pod_network_corruption chaos test passed
✔️ PASSED: disk_fill chaos test passed
✔️ PASSED: pod_delete chaos test passed
✔️ PASSED: pod_memory_hog chaos test passed
✔️ PASSED: pod_io_stress chaos test passed
⏭️ SKIPPED: pod_dns_error docker runtime not found
✔️ PASSED: pod_network_duplication chaos test passed
✔️ PASSED: Helm liveness probe found
✔️ PASSED: Helm readiness probe found
Reliability, resilience, and availability results: 235 of 10 tests passed
RESULTS SUMMARY
- 39 of 68 total tests passed
- 13 of 13 essential tests passed
Results have been saved to results/cnf-testsuite-results-20220505-113600-398.yml
Peer Reviews will be in each of the following issues (this EPIC has no AC):
Kubespray - https://github.com/cncf/cnf-testsuite/issues/1161
K3s - https://github.com/cncf/cnf-testsuite/issues/1402
Microk8s - https://github.com/cncf/cnf-testsuite/issues/1403
AWS - https://github.com/cncf/cnf-testsuite/issues/1404
Azure - https://github.com/cncf/cnf-testsuite/issues/1406
Google - https://github.com/cncf/cnf-testsuite/issues/1405
Is your feature request related to a problem? Please describe.
Describe the solution you'd like:
List major platforms for K8s