GeneralFox opened this issue 3 years ago
Creating a how-to in the dapr/docs repo for deploying to OpenShift would be a good outcome of this issue.
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.
Has anyone been able to deploy in OpenShift? I would like to give it a shot myself, but want to know if anyone has been successful already.
This is an old issue but I want to share our experience:
In our case, OpenShift's OLM manager service account was not in Dapr's list of allowed service accounts. We deployed it in OpenShift accepting this, and didn't see any critical problem, but the Dapr injector kept producing error messages. We are not OCP experts, but Dapr behaves differently on OCP. If OCP is among the target platforms for Dapr, there should be OpenShift documentation and separate tests for Dapr on OCP. Before OCP, we were using Dapr on Rancher and didn't see such issues. Right now I am not sure which permissions we granted to Dapr in OCP; we can query this and give feedback in this issue.
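For reference, what was actually granted can be listed with a few standard commands; the namespace and service account names below assume a default install and may need adjusting:
# role bindings that reference the Dapr service accounts
oc get rolebindings -A -o wide | grep dapr
oc get clusterrolebindings -o wide | grep dapr
# SCCs that list the Dapr service accounts in their users field
oc get scc -o custom-columns=NAME:.metadata.name,USERS:.users | grep -i dapr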
We granted some permissions to get sidecar injection and the Dapr components working.
@sedat-eyuboglu and @Admiralkheir thank you both for the OCP info!
Please keep updating this issue and we'll create a Dapr on OpenShift section in our docs.
A couple of other notes on OpenShift for the documentation:
oc policy add-role-to-user system:openshift:scc:anyuid -z default
oc policy add-role-to-user cluster-admin -z dapr-operator
Installation on OCP should not be too difficult, as OCP now supports standard Kubernetes resources such as:
apiVersion: apps/v1
kind: Deployment
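For illustration, a plain apps/v1 Deployment carrying the usual Dapr sidecar annotations is enough; the app name, image, and port below are placeholders, not taken from this thread:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"     # ask the injector to add the daprd sidecar
        dapr.io/app-id: "myapp"
        dapr.io/app-port: "3000"
    spec:
      containers:
        - name: myapp
          image: image-registry.example.com/myapp:latest
          ports:
            - containerPort: 3000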
I successfully tested this on OCP 4.11:
oc new-project dapr
#oc policy add-role-to-user system:openshift:scc:anyuid -z dapr
helm upgrade --install dapr dapr/dapr --version=1.9.5 -n dapr
oc get pods
NAME READY STATUS RESTARTS AGE
dapr-dashboard-79655c5d74-c9kz9 1/1 Running 0 107s
dapr-operator-867f4f767d-zp6q4 1/1 Running 0 107s
dapr-placement-server-0 1/1 Running 0 107s
dapr-sentry-67f95947f7-8cm6z 1/1 Running 0 107s
dapr-sidecar-injector-57b975c449-gcq6x 1/1 Running 0 107s
To access the dashboard, simply create an ingress resource
HOST=<HOST.DOMAIN-NAME>
kubectl create ingress -n dapr dapr --rule="dapr.$HOST/*=dapr-dashboard:8080"
echo "http://$(oc get ingress/dapr -o json | jq -r '.spec.rules[].host')"
http://dapr.<HOST.DOMAIN-NAME>
Next, I did a test using the Dapr hello-kubernetes quickstart:
git clone https://github.com/dapr/quickstarts.git && cd quickstarts/tutorials/hello-kubernetes
helm install redis bitnami/redis -n dapr --set master.podSecurityContext.enabled=false --set master.containerSecurityContext.enabled=false
oc apply -f ./deploy/redis.yaml
oc apply -f ./deploy/node.yaml
oc rollout status deploy/nodeapp
NODEAPP_URL=nodeapp.<HOST.DOMAIN-NAME>
kubectl create ingress nodeapp --rule="$NODEAPP_URL/*=nodeapp:80"
curl $NODEAPP_URL/ports
curl --request POST --data "@sample.json" --header Content-Type:application/json http://${NODEAPP_URL}/neworder
curl http://${NODEAPP_URL}/order
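As a sanity check that the sidecar was actually injected (the app=node label is what the quickstart manifest uses; adjust if yours differs):
# the nodeapp pod should report READY 2/2 once the daprd sidecar is injected
oc get pods -l app=node
# and the daprd container should be logging without TLS or placement errors
oc logs -l app=node -c daprd --tail=20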
I did a new end-to-end test using the following bash scripts and commands:
git clone https://github.com/ch007m/my-dapr.git && cd my-dapr
HOST_VM_IP=<HOST.DOMAIN_NAME> ./ocp/setup-dapr.sh
HOST_VM_IP=<HOST.DOMAIN_NAME> ./ocp/demo_order.sh
NOTE: To clean up, just run:
oc delete project dapr
Apparently (until we define it better) we only need to add this role binding:
oc policy add-role-to-user system:openshift:scc:anyuid -z dapr-operator
and we also have to figure out how to fix the following warnings reported during the creation of some pods (a possible securityContext fix is sketched after the warning list):
Warning: would violate PodSecurity "restricted:v1.24":
allowPrivilegeEscalation != false (container "node" must set securityContext.allowPrivilegeEscalation=false),
unrestricted capabilities (container "node" must set securityContext.capabilities.drop=["ALL"]),
runAsNonRoot != true (pod or container "node" must set securityContext.runAsNonRoot=true),
seccompProfile (pod or container "node" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
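For the application pod itself, each warning maps directly onto a securityContext field; a minimal sketch, assuming it is added to the node container in the quickstart's node.yaml, would be:
securityContext:
  allowPrivilegeEscalation: false   # addresses the allowPrivilegeEscalation warning
  capabilities:
    drop: ["ALL"]                   # addresses the unrestricted capabilities warning
  runAsNonRoot: true                # addresses the runAsNonRoot warning
  seccompProfile:
    type: RuntimeDefault            # addresses the seccompProfile warning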
Warnings can be avoided if you deploy Dapr differently:
values.yaml
dapr_dashboard:
  runAsNonRoot: true
dapr_placement:
  runAsNonRoot: true
dapr_operator:
  runAsNonRoot: true
dapr_sentry:
  runAsNonRoot: true
dapr_sidecar_injector:
  runAsNonRoot: true
global:
  logAsJson: true
helm install dapr dapr/dapr --namespace dapr-system --values values.yaml --wait
Not sure that all the pods can be launched as non-root, as the sidecar container is not injected into, for example, the nodeapp pod, and sidecar_injector reports 2023/02/06 18:45:00 http: TLS handshake error from 172.17.33.19:38358: remote error: tls: bad certificate
@berndverst
You'll have to play around with these settings - I am not familiar with the nuances of running as nonRoot and with pod security policies.
We cannot use the debug enabled parameter of the Dapr sidecar injector, as the Deployment resource reports the following error:
pods "dapr-sidecar-injector-6bf4997c8-" is forbidden: unable to validate against any security context constraint:
- [spec.containers[0].securityContext.capabilities.add: Invalid value: "SYS_PTRACE": capability may not be added,
- spec.containers[0].securityContext.runAsUser: Invalid value: 1000: must be in the ranges: [1000900000, 1000909999],
- provider "restricted": Forbidden: not usable by user or serviceaccount,
- provider "ibm-restricted-scc": Forbidden: not usable by user or serviceaccount,
- provider "nonroot-v2": Forbidden: not usable by user or serviceaccount,
- provider "nonroot": Forbidden: not usable by user or serviceaccount,
- provider "ibm-anyuid-scc": Forbidden: not usable by user or serviceaccount ...
NOTE: This problem can be fixed if we add the missing capability SYS_PTRACE to the anyuid SCC. Nevertheless, we then got another error: Error: container create failed: time="2023-02-07T00:31:27-06:00" level=error msg="runc create failed: unable to start container process: exec: \"/dlv\": stat /dlv: no such file or directory"
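One possible (untested) way to add that capability; note that editing the built-in anyuid SCC is generally discouraged, and a dedicated copy of the SCC is usually the safer route:
# add SYS_PTRACE to the capabilities allowed by the anyuid SCC
# (cluster policy may revert changes to built-in SCCs; prefer a custom SCC copy)
oc patch scc anyuid --type=merge -p '{"allowedCapabilities":["SYS_PTRACE"]}'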
@cmoulliard can you uninstall Dapr and delete the namespace you installed it into? Then try to install again. It looks like there's a bad/missing TLS cert in the control plane namespace.
After you reinstall, if the same error repeats, can you paste the output of "kubectl get secrets -n dapr-system"? (If you installed in a different namespace, replace accordingly)
After you reinstall, if the same error repeats,
I cannot reproduce it this morning. So the only issue we still have to fix is to set up an OCP SCC able to run the pod in debug mode.
Not sure that all the pods can be launched as non-root, as the sidecar container is not injected into, for example, the nodeapp pod, and sidecar_injector reports
2023/02/06 18:45:00 http: TLS handshake error from 172.17.33.19:38358: remote error: tls: bad certificate
@berndverst
FYI I ran into this issue today also - but in k3s (k3d) with the default helm chart settings.
This seems unrelated.
sidecar_injector reports
2023/02/06 18:45:00 http: TLS handshake error from 172.17.33.19:38358: remote error: tls: bad certificate
I suggest opening a separate ticket (apart from this one) to investigate the TLS handshake issue, as it is really painful and blocking. WDYT? @berndverst
/area operator
Hello to all, I'm trying to install Dapr for our development team on OpenShift 4.7 with the helm chart. Out of the box the chart doesn't work with OCP security enforcement, but with some tuning of the permissions ( oc adm policy add-scc-to-user nonroot -z dapr-operator -n dapr-system ) I was able to deploy the chart. However, I have a failure on the dapr-placement server:
create Pod dapr-placement-server-0 in StatefulSet dapr-placement-server failed error: pods "dapr-placement-server-0" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000700000, 1000709999] spec.containers[0].securityContext.runAsUser: Invalid value: 0: running with the root UID is forbidden]
The simpler way could be to grant anyuid to the dapr-operator service account, but I've tested that on a small test environment and it caused cluster failures (control plane failure, operator redeploys, instability, etc.) that I'm currently investigating. Looking at the helm chart, I've found these lines in dapr_placement_deployment.yaml:
{{- if eq .Values.cluster.forceInMemoryLog true }}
runAsNonRoot: {{ .Values.runAsNonRoot }}
{{- else }}
runAsUser: 0
and I'm wondering if this is intentional.
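If that template logic is accurate, one untested workaround would be to force the in-memory placement log so the StatefulSet takes the runAsNonRoot branch instead of runAsUser: 0; the value path below assumes the dapr_placement subchart naming:
# hypothetical: enable the placement in-memory raft log to avoid runAsUser: 0
helm upgrade --install dapr dapr/dapr -n dapr-system \
  --set dapr_placement.cluster.forceInMemoryLog=true --wait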
Has anyone else tried a deployment on OCP and can share some experience?