Open kevinyu98 opened 4 years ago
IIRC the spark-crd-operator service account is only for the actual operator instance; the spark-operator service account is used for the actions of deploying pods in your user project. I have not tried to reproduce this yet, but I will take a look. Thanks for reporting it!

Looking at your screenshot, it appears you are trying to run the Spark application in a namespace called operators. I would double check that this is the proper namespace, and that you have sufficient privileges to create these resources there.
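If it helps, one quick way to verify the namespace and privileges is `oc auth can-i`; the namespace, service account, and CRD resource names below are taken from this thread and are assumptions about your setup:

```
# Can your own user create pods and the Spark CR in the operators namespace?
oc auth can-i create pods -n operators
oc auth can-i create sparkapplications -n operators   # resource name is an assumption; use your CRD's plural

# Can the operator's service account create pods there?
oc auth can-i create pods -n operators \
  --as=system:serviceaccount:operators:spark-crd-operator
```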
Creating the Spark Operator with OLM creates the service account spark-crd-operator. When a Spark application is then run without specifying a service account, the operator falls back to its default, spark-operator, which does not exist, so the application pod cannot be created.
Description:
I deployed the operator through the OLM console, then tried to run the example from the OLM console. After creating it, no pod was created. From `oc get events`:

```
$ oc get events
LAST SEEN   FIRST SEEN   COUNT   NAME                                      KIND                    SUBOBJECT   TYPE      REASON         SOURCE                   MESSAGE
8m          2h           39      my-spark-app-submitter.15dda9bcc532da58   ReplicationController               Warning   FailedCreate   replication-controller   Error creating: pods "my-spark-app-submitter-" is forbidden: error looking up service account operators/spark-operator: serviceaccount "spark-operator" not found
```
Here are my service accounts:

```
$ oc get serviceaccount
NAME                 SECRETS   AGE
builder              2         16d
default              2         16d
deployer             2         16d
spark-crd-operator   2         9d
```
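As a possible workaround (untested, and the role choice below is an assumption about what the submitter needs), creating the service account the operator falls back to should let the replication controller create its pods:

```
# Untested workaround: create the service account the operator expects by default
oc create serviceaccount spark-operator -n operators

# Grant it rights to create resources in this project (edit role is an assumption)
oc adm policy add-role-to-user edit -z spark-operator -n operators
```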
From the operator code, it seems that if the Spark application does not provide a service account, it uses the default one, which is spark-operator. Should we change manifest/olm/crd/sparkclusteroperator.1.0.1.clusterserviceversion.yaml to use spark-operator? The OLM ConfigMap-based YAML file, manifest/olm/configmap-based-all-in-one-csv.yaml, is already using spark-operator.
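For illustration, here is a rough sketch of the kind of change that question points at: have the ClusterServiceVersion grant permissions to, and run the operator as, a service account named spark-operator so it matches the operator's built-in default. The field values and RBAC rules below are assumptions, not the actual contents of either file:

```yaml
# Hypothetical fragment only -- not the real contents of
# manifest/olm/crd/sparkclusteroperator.1.0.1.clusterserviceversion.yaml
spec:
  install:
    strategy: deployment
    spec:
      permissions:
        - serviceAccountName: spark-operator   # instead of spark-crd-operator
          rules:
            - apiGroups: [""]
              resources: ["pods", "services", "configmaps", "replicationcontrollers"]
              verbs: ["*"]
      deployments:
        - name: spark-operator
          spec:
            template:
              spec:
                serviceAccountName: spark-operator
```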
Steps to reproduce: