The open-source platform for visualizing Kubernetes & DevSecOps workflows
Visualize Kubernetes & DevSecOps workflows. KubViz tracks changes and events in real time across your Kubernetes clusters, Git repositories, and container registries, covering container image vulnerability scanning, misconfiguration detection, SBOMs, and more. It analyzes their effects and provides the context you need to troubleshoot efficiently. Get the observability you need, easily.
The KubViz client can be installed on any Kubernetes cluster. The KubViz agent runs in the Kubernetes cluster whose changes and events need to be tracked. The agent detects changes in real time and sends those events via NATS JetStream to the KubViz client.
The KubViz client receives the events and writes them to a ClickHouse database. The events stored in ClickHouse can then be visualized through Grafana.
KubViz's event tracking component provides comprehensive visibility into the changes and events occurring within your Kubernetes clusters.
KubViz offers seamless integration with Git repositories, empowering you to effortlessly track and monitor changes in your codebase by capturing events such as commits, merges, and other Git activities.
KubViz also monitors changes in your container registry, providing visibility into image updates. By tracking these changes, KubViz helps you proactively manage container security and compliance.
It comprehensively scans Kubernetes containers for security flaws, such as vulnerabilities and misconfigurations, and creates an SBOM (Software Bill of Materials).
The following command creates a new namespace for KubViz in your cluster.
kubectl create namespace kubviz
helm repo add kubviz https://intelops.github.io/kubviz/
helm repo update
The following command generates a token. Take note of this token, as it is used for both the client and agent installations.
token=$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
helm upgrade -i kubviz-client kubviz/client -n kubviz --set "nats.auth.token=$token"
NOTE:
If you want to enable Grafana with the client deployment, add --set grafana.enabled=true to the helm upgrade command.
KubViz provides a Grafana setup with PostgreSQL data persistence, ensuring that even if the Grafana pod or service goes down, the data persists, safeguarding crucial information for visualization and analysis.
helm upgrade -i kubviz-client kubviz/client -n kubviz --set "nats.auth.token=$token" --set grafana.enabled=true --set grafana.postgresql=true
helm upgrade -i kubviz-client kubviz/client -n kubviz --set "nats.auth.token=$token" --set grafana.enabled=true
| Parameter | Description | Default |
|---|---|---|
| grafana.enabled | If true, create grafana | false |
| grafana.postgresql | If true, create postgresql | false |
The following command will retrieve the IP address. Please make sure to take note of this IP address as it will be used for agent installation if your agent is located in a different cluster.
kubectl get services kubviz-client-nats-external -n kubviz --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
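For convenience, you can capture this address in a shell variable (the variable name here is just an example) and pass it to the agent installation later via --set nats.host:
natsip=$(kubectl get services kubviz-client-nats-external -n kubviz --output jsonpath='{.status.loadBalancer.ingress[0].ip}')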
helm upgrade -i kubviz-agent kubviz/agent -n kubviz \
--set "nats.auth.token=$token" \
--set git_bridge.enabled=true \
--set "git_bridge.ingress.hosts[0].host=<INGRESS HOSTNAME>",git_bridge.ingress.hosts[0].paths[0].path=/,git_bridge.ingress.hosts[0].paths[0].pathType=Prefix,git_bridge.ingress.tls[0].secretName=<SECRET-NAME>,git_bridge.ingress.tls[0].hosts[0]=<INGRESS HOSTNAME> \
--set container_bridge.enabled=true \
--set "container_bridge.ingress.hosts[0].host=<INGRESS HOSTNAME>",container_bridge.ingress.hosts[0].paths[0].path=/,container_bridge.ingress.hosts[0].paths[0].pathType=Prefix,container_bridge.ingress.tls[0].secretName=<SECRET-NAME>,container_bridge.ingress.tls[0].hosts[0]=<INGRESS HOSTNAME>
NOTE: If you want to get a token from a secret, use a secret reference with the secret's name and key.
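For example, you could store the generated token in a Kubernetes secret and then reference that secret's name and key in the chart values (the secret name and key below are placeholders; check the chart's values.yaml for the exact reference fields):
kubectl create secret generic kubviz-nats-token --from-literal=token="$token" -n kubviz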
| Parameter | Description | Default |
|---|---|---|
| nats.host | nats host | kubviz-client-nats |
| git_bridge.enabled | If true, create git_bridge | false |
| git_bridge.ingress.hosts[0].host | git_bridge ingress host name | gitbridge.local |
| git_bridge.ingress.hosts[0].paths[0].path | git_bridge ingress host path | / |
| git_bridge.ingress.hosts[0].paths[0].pathType | git_bridge ingress host path type | Prefix |
| container_bridge.enabled | If true, create container_bridge | false |
| container_bridge.ingress.hosts[0].host | container_bridge ingress host name | containerbridge.local |
| container_bridge.ingress.hosts[0].paths[0].path | container_bridge ingress host path | / |
| container_bridge.ingress.hosts[0].paths[0].pathType | container_bridge ingress host path type | Prefix |
| git_bridge.ingress.tls | git_bridge ingress tls configuration | [] |
| container_bridge.ingress.tls | container_bridge ingress tls configuration | [] |
NOTE:
By default, this Helm chart includes the following annotations for the git bridge and container bridge ingress resource:
annotations:
  cert-manager.io/cluster-issuer: letsencrypt-prod-cluster
  kubernetes.io/force-ssl-redirect: "true"
  kubernetes.io/ssl-redirect: "true"
  kubernetes.io/tls-acme: "true"
...
If you do not want to use the default values, modify the annotations in values.yaml and execute the following command:
helm upgrade -i kubviz-agent kubviz/agent -f values.yaml -n kubviz
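For reference, an override might look like the following sketch. The key paths for the ingress annotations (shown here as git_bridge.ingress.annotations and container_bridge.ingress.annotations) and the issuer name are assumptions, so confirm them against the chart's values.yaml:
cat > ingress-values.yaml <<'EOF'
# Assumed key paths and example issuer; verify against the chart's values.yaml
git_bridge:
  ingress:
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-staging
container_bridge:
  ingress:
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-staging
EOF
helm upgrade -i kubviz-agent kubviz/agent -f ingress-values.yaml -n kubviz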
helm upgrade -i kubviz-agent kubviz/agent -n kubviz --set nats.host=<NATS IP Address> --set "nats.auth.token=$token"
NOTE:
A time-based job scheduler is included for each plugin, allowing you to schedule and automate plugin execution at specific times or intervals. To activate the scheduler, set 'enabled' to 'true'. Once enabled, each plugin's execution can be configured to run at a precise time or at regular intervals, based on the provided settings. Additionally, setting the 'schedulingInterval' to '0' disables the plugins.
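As a sketch (assuming your agent values expose the schedule block shown in the security scan example later in this guide), you could enable the scheduler through a small values override:
cat > scheduler-values.yaml <<'EOF'
schedule:
  enabled: true
  trivyclusterscanInterval: "@every 24h"
EOF
helm upgrade -i kubviz-agent kubviz/agent -n kubviz -f scheduler-values.yaml --set "nats.auth.token=$token"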
After completing the installation of both the client and agent, you can use the following command to verify if they are up and running.
kubectl get all -n kubviz
Once everything is up and running, you need to perform additional configurations to monitor git repository events and container registry events.
To ensure that these events are sent to KubViz, you need to create a webhook for your repository. This webhook will transmit the event data of the specific repository or registry to KubViz.
To set up a webhook in your repository, please follow these steps:
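To access the Grafana dashboard, retrieve the Grafana admin password and port-forward the Grafana pod using the commands below: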
kubectl get secret --namespace kubviz kubviz-client-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
export POD_NAME=$(kubectl get pods --namespace kubviz -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=kubviz-client" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace kubviz port-forward $POD_NAME 3000
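Then open http://localhost:3000 in your browser and sign in as the admin user with the password retrieved above.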
Mutual TLS (mTLS) is an extension of standard Transport Layer Security (TLS) that enhances security by requiring both the client and server to authenticate and verify each other's identities during the SSL/TLS handshake process. This mutual authentication helps ensure that both parties are who they claim to be, providing a higher level of security for sensitive data exchanges.
In our kubviz setup, we use mTLS for secure communication with the NATS server. Both the agent and the client connect to the NATS server using mTLS. The agent sends data to the NATS server securely, and the client also uses mTLS to receive data from the NATS server.
Enhanced Security: mTLS ensures that both the client and server are authenticated, mitigating the risk of man-in-the-middle attacks.
Data Integrity: By verifying identities, mTLS ensures that data is exchanged between trusted entities only.
Regulatory Compliance: For many industries, mTLS is a requirement for compliance with regulations that mandate secure communication.
To enable mTLS in your application for agent-to-NATS communication, follow these steps:
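The exact chart values vary by KubViz version, but a typical sketch looks like this: create Kubernetes secrets that hold the client certificate, key, and CA certificate, then reference them in the agent's and client's NATS configuration (the secret names and file names below are placeholders; check the chart's values.yaml for the exact keys):
kubectl create secret tls kubviz-nats-client-tls --cert=client.crt --key=client.key -n kubviz
kubectl create secret generic kubviz-nats-ca --from-file=ca.crt -n kubviz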
We've implemented a Time-To-Live (TTL) feature to streamline the management of data within your ClickHouse tables. With TTL, historical data can be automatically relocated to alternative storage or purged to optimize storage space. This feature is particularly valuable for scenarios like time-series data or logs where older data gradually loses its relevance over time.
The TTL value is customizable, empowering you to define the specific duration after which data is marked as 'expired'.
To guide you through the process of setting up a TTL, please follow these steps:
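For illustration (the table and column names here are placeholders for your actual KubViz ClickHouse schema), a 30-day TTL can be applied with a ClickHouse ALTER TABLE statement run from inside the ClickHouse pod:
kubectl exec -it <CLICKHOUSE POD NAME> -n kubviz -- clickhouse-client --query "ALTER TABLE events MODIFY TTL EventTime + INTERVAL 30 DAY"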
KubViz enables you to perform cluster scans, image scans, and SBOM creation in CycloneDX format. These scans help you identify vulnerabilities.
You can customize the security scans by changing the chart values.
For example, setting the Trivy cluster scan interval to 0 disables that scan:
schedule:
  enabled: true
  trivyclusterscanInterval: 0
...
To run the cluster scan every 24 hours instead:
schedule:
  enabled: true
  trivyclusterscanInterval: "@every 24h"
...
You can make the same change for the image scan and SBOM plugins.
You can run different types of checks against your Kubernetes cluster to detect issues or potential problems before they cause downtime or service disruptions. The checks run in the background and send data to KubViz. After analyzing the data in the dashboard, you can quickly take corrective action if any issues are detected.
Please check the configuration for health checks.
Use KubViz to monitor your cluster events, including:
KubViz allows you to track and observe all the events in your Git repository.
By capturing events such as commits, merges, and other Git activities, KubViz provides valuable insights into the evolution of your code. This comprehensive change tracking capability allows you to analyze the effects of code modifications on your development and deployment workflows, facilitating efficient collaboration among teams. With this feature, you can easily identify the root causes of issues, ensure code integrity, and maintain a clear understanding of the changes happening within your Git repositories.
You are warmly welcome to contribute to KubViz. Please refer to the detailed guide CONTRIBUTING.md.
Active communication channels
Refer to the license - LICENCE.