I manually tested the dev branch against Red Hat OpenShift v4.16.2 now:
$ oc new-project instana-agent
$ oc adm policy add-scc-to-user privileged -z instana-agent -n instana-agent
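# optional sanity check: add-scc-to-user adds the service account to the SCC's user list,
# so the assignment can be verified directly
$ oc get scc privileged -o jsonpath='{.users}'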
# create the pull secret manually
$ cat pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: delivery-instana-io-pull-secret
  namespace: instana-agent
data:
  .dockerconfigjson: xxx
type: kubernetes.io/dockerconfigjson
$ kubectl apply -f pull-secret.yaml
...
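Side note: instead of hand-writing the secret YAML, the same pull secret could be created imperatively; the registry host and credentials below are placeholders, not the values used in this test:
$ kubectl create secret docker-registry delivery-instana-io-pull-secret \
    --docker-server=delivery.instana.io \
    --docker-username=<user> \
    --docker-password=<token> \
    -n instana-agent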
# deploy the operator YAML manifests
$ make deploy
...
# set the updated pull secret on the operator service account
$ kubectl patch serviceaccount instana-agent-operator -p '{"imagePullSecrets": [{"name": "delivery-instana-io-pull-secret"}]}'
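# confirm the patch landed before restarting anything
$ kubectl get serviceaccount instana-agent-operator -o jsonpath='{.imagePullSecrets}'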
# redeploy the pods so they pick up the updated pull secret
$ kubectl scale --replicas=0 deployment/controller-manager
$ kubectl scale --replicas=2 deployment/controller-manager
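# on recent kubectl versions, a single rollout restart should achieve the same
# as scaling down and back up
$ kubectl rollout restart deployment/controller-manager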
# ensure that the operator starts up
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
controller-manager-6478cb4c6f-j6n4k   1/1     Running   0          2m50s
controller-manager-6478cb4c6f-pdxlm   1/1     Running   0          2m50s
# check the logs to confirm that only a single controller-manager is elected as leader
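One way to do that check without scanning full logs (the lease name the operator registers is not pinned down here, so I just list all of them and grep the logs for leader-election messages):
$ kubectl get leases -n instana-agent
$ kubectl logs controller-manager-6478cb4c6f-j6n4k | grep -i leader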
# create a custom resource for the agent with multiple backends
$ cat instanaagent-ocp-konrad.yaml
apiVersion: instana.io/v1
kind: InstanaAgent
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  zone:
    name: konrad-ocp # (optional) name of the zone of the host
  cluster:
    name: konrad-ocp
  agent:
    key: xxx
    endpointHost: ingress-red-saas.instana.io
    endpointPort: "443"
    additionalBackends:
      - endpointHost: ingress-magenta-saas.instana.rocks
        endpointPort: "443"
        key: xxx
    env: {}
    configuration_yaml: |
      # You can leave this empty, or use this to configure your instana agent.
      # See https://docs.instana.io/setup_and_manage/host_agent/on/kubernetes/
# deploy the custom resource
$ kubectl apply -f instanaagent-ocp-konrad.yaml
instanaagent.instana.io/instana-agent created
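The operator's view of the CR can also be inspected directly (same agent shortname as used for kubectl edit further below; which status fields get populated is not asserted here):
$ kubectl describe agent instana-agent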
# check that pods are coming up correctly
$ kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
controller-manager-6478cb4c6f-j6n4k       1/1     Running   0          11m
controller-manager-6478cb4c6f-pdxlm       1/1     Running   0          11m
instana-agent-k8sensor-8699848df8-4fl6b   1/1     Running   0          2m3s
instana-agent-k8sensor-8699848df8-mz4qb   1/1     Running   0          2m3s
instana-agent-k8sensor-8699848df8-nxbnb   1/1     Running   0          2m3s
instana-agent-m6kqp                       1/1     Running   0          2m3s
instana-agent-mqc5t                       1/1     Running   0          2m3s
instana-agent-q677z                       1/1     Running   0          2m3s
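Before checking the UI, a grep over one host-agent pod's log can hint whether both endpoint hosts are picked up (searching only for the hostnames, without assuming any particular log format):
$ kubectl logs instana-agent-m6kqp | grep -E 'ingress-red-saas|ingress-magenta-saas'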
Validated that the cluster is shown as a Kubernetes cluster on the main backend; it shows 3 worker nodes as expected. On the second backend, the Kubernetes cluster is not visible, which is expected since the k8sensor is not yet capable of handling multiple backends. The host agents, however, are shown correctly on the agents page, and I can navigate to the same host entry on the second backend.
The next test was to define the agent key as an external secret instead of placing it in the custom resource:
$ cat instana-agent-key.yaml
apiVersion: v1
stringData:
  key: xxx
kind: Secret
metadata:
  labels:
    app.kubernetes.io/component: instana-agent
    app.kubernetes.io/instance: instana-agent
  name: instana-agent-key
  namespace: instana-agent
type: Opaque
$ kubectl apply -f instana-agent-key.yaml
secret/instana-agent-key created
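# sanity check that the key round-trips through the secret
$ kubectl get secret instana-agent-key -o jsonpath='{.data.key}' | base64 -d
xxx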
# adjust the CR to reference the secret
$ cat instanaagent-ocp-konrad.yaml
apiVersion: instana.io/v1
kind: InstanaAgent
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  zone:
    name: konrad-ocp # (optional) name of the zone of the host
  cluster:
    name: konrad-ocp
  agent:
    keysSecret: instana-agent-key
    endpointHost: ingress-red-saas.instana.io
    endpointPort: "443"
    additionalBackends:
      - endpointHost: ingress-magenta-saas.instana.rocks
        endpointPort: "443"
        key: xxx
    env: {}
    configuration_yaml: |
      # You can leave this empty, or use this to configure your instana agent.
      # See https://docs.instana.io/setup_and_manage/host_agent/on/kubernetes/
$ kubectl apply -f instanaagent-ocp-konrad.yaml
instanaagent.instana.io/instana-agent configured
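Instead of polling the pod list, the DaemonSet rollout can also be followed directly (DaemonSet name inferred from the pod names above):
$ kubectl rollout status daemonset/instana-agent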
# watch the pods being re-deployed
$ kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
controller-manager-6478cb4c6f-j6n4k      1/1     Running   0          26m
controller-manager-6478cb4c6f-pdxlm      1/1     Running   0          26m
instana-agent-fpd8b                      1/1     Running   0          46s
instana-agent-k8sensor-6f68bb856-4jrd7   1/1     Running   0          43s
instana-agent-k8sensor-6f68bb856-l2rst   1/1     Running   0          46s
instana-agent-k8sensor-6f68bb856-w9bqx   1/1     Running   0          48s
instana-agent-ww2kb                      1/1     Running   0          44s
instana-agent-xl6pg                      1/1     Running   0          48s
Hosts are still reported on both sites.
Removing the additionalBackends section altogether via kubectl edit agent instana-agent
changes the backend config correctly and makes the host disappear from the second site, even without restarting the pods; a scripted equivalent of that edit is sketched below.
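For a scripted variant of that edit, a JSON patch removing the list should be equivalent (untested sketch; the path matches the CR layout shown above):
$ kubectl patch agent instana-agent --type=json \
    -p='[{"op":"remove","path":"/spec/agent/additionalBackends"}]'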
Rebased on main. This PR adds support for additional backends for the host agent.