Closed · jkremser closed this 5 years ago
@elmiko could you PTAL?
i tested this and ran into something that i'm not sure about.
steps for testing: run once with the default (`CRD` unset), then again with `export CRD=false`.

the default worked as expected, here is the output from that run:

[mike@opb-ultra] master ~/workspace/spark-operator
$ echo $CRD

$ java -jar target/spark-operator-0.3.6-SNAPSHOT.jar
2019-03-15 09:58:40 INFO Entrypoint:56 - Starting..
2019-03-15 09:58:41 INFO Entrypoint:215 - https://shift.opb.studios:8443/oapi returned 200. We are on OpenShift.
2019-03-15 09:58:41 INFO Manifests:193 - 10 attributes loaded from 1 stream(s) in 2ms, 10 saved, 0 ignored: ["Build-Jdk", "Built-By", "Created-By", "Implementation-Build", "Implementation-Title", "Implementation-URL", "Implementation-Vendor-Id", "Implementation-Version", "Main-Class", "Manifest-Version"]
2019-03-15 09:58:41 INFO Entrypoint:236 -
Operator has started in version 0.3.6-SNAPSHOT.
2019-03-15 09:58:41 INFO Entrypoint:239 - Git sha: d5e35a42
2019-03-15 09:58:41 INFO Entrypoint:241 - ==================
2019-03-15 09:58:41 INFO Entrypoint:74 - OpenShift environment detected.
2019-03-15 09:58:42 INFO AbstractOperator:246 - Starting 'SparkApplication' operator for namespace abs
2019-03-15 09:58:42 INFO Entrypoint:174 - full reconciliation for 'SparkApplication' operator scheduled (periodically each 180 seconds)
2019-03-15 09:58:42 INFO Entrypoint:175 - the first full reconciliation for 'SparkApplication' operator is happening in 2 seconds
2019-03-15 09:58:42 INFO AbstractOperator:246 - Starting 'SparkCluster' operator for namespace abs
2019-03-15 09:58:42 INFO AbstractOperator:42 - SparkCluster operator default spark image = quay.io/jkremser/openshift-spark:2.4.0
2019-03-15 09:58:42 INFO AbstractOperator:264 - 'SparkApplication' operator running for namespace abs
2019-03-15 09:58:42 INFO Entrypoint:154 - 'SparkApplication' operator started in namespace abs
2019-03-15 09:58:42 INFO AbstractWatcher:157 - CustomResource watcher running for kinds SparkApplication
2019-03-15 09:58:42 INFO Entrypoint:174 - full reconciliation for 'SparkCluster' operator scheduled (periodically each 180 seconds)
2019-03-15 09:58:42 INFO Entrypoint:175 - the first full reconciliation for 'SparkCluster' operator is happening in 3 seconds
2019-03-15 09:58:42 INFO AbstractOperator:264 - 'SparkCluster' operator running for namespace abs
2019-03-15 09:58:42 INFO Entrypoint:154 - 'SparkCluster' operator started in namespace abs
2019-03-15 09:58:42 INFO AbstractWatcher:157 - CustomResource watcher running for kinds SparkCluster
2019-03-15 09:58:45 INFO AbstractOperator:96 - Running full reconciliation for namespace abs and kind SparkCluster..
2019-03-15 09:58:46 INFO AbstractOperator:166 - no change was detected during the reconciliation
but when i set `export CRD=false`, i was expecting the output to confirm that the operator was watching ConfigMaps. instead, the output is the same as with the CRD default.
$ export CRD=false
[mike@opb-ultra] master ~/workspace/spark-operator
$ java -jar target/spark-operator-0.3.6-SNAPSHOT.jar
2019-03-15 09:59:03 INFO Entrypoint:56 - Starting..
2019-03-15 09:59:04 INFO Entrypoint:215 - https://shift.opb.studios:8443/oapi returned 200. We are on OpenShift.
2019-03-15 09:59:04 INFO Manifests:193 - 10 attributes loaded from 1 stream(s) in 2ms, 10 saved, 0 ignored: ["Build-Jdk", "Built-By", "Created-By", "Implementation-Build", "Implementation-Title", "Implementation-URL", "Implementation-Vendor-Id", "Implementation-Version", "Main-Class", "Manifest-Version"]
2019-03-15 09:59:04 INFO Entrypoint:236 -
Operator has started in version 0.3.6-SNAPSHOT.
2019-03-15 09:59:04 INFO Entrypoint:239 - Git sha: d5e35a42
2019-03-15 09:59:04 INFO Entrypoint:241 - ==================
2019-03-15 09:59:04 INFO Entrypoint:74 - OpenShift environment detected.
2019-03-15 09:59:06 INFO AbstractOperator:246 - Starting 'SparkApplication' operator for namespace abs
2019-03-15 09:59:06 INFO Entrypoint:174 - full reconciliation for 'SparkApplication' operator scheduled (periodically each 180 seconds)
2019-03-15 09:59:06 INFO Entrypoint:175 - the first full reconciliation for 'SparkApplication' operator is happening in 2 seconds
2019-03-15 09:59:06 INFO AbstractOperator:246 - Starting 'SparkCluster' operator for namespace abs
2019-03-15 09:59:06 INFO AbstractOperator:42 - SparkCluster operator default spark image = quay.io/jkremser/openshift-spark:2.4.0
2019-03-15 09:59:06 INFO AbstractOperator:264 - 'SparkApplication' operator running for namespace abs
2019-03-15 09:59:06 INFO Entrypoint:154 - 'SparkApplication' operator started in namespace abs
2019-03-15 09:59:06 INFO AbstractWatcher:157 - CustomResource watcher running for kinds SparkApplication
2019-03-15 09:59:06 INFO Entrypoint:174 - full reconciliation for 'SparkCluster' operator scheduled (periodically each 180 seconds)
2019-03-15 09:59:06 INFO Entrypoint:175 - the first full reconciliation for 'SparkCluster' operator is happening in 3 seconds
2019-03-15 09:59:06 INFO AbstractOperator:264 - 'SparkCluster' operator running for namespace abs
2019-03-15 09:59:06 INFO Entrypoint:154 - 'SparkCluster' operator started in namespace abs
2019-03-15 09:59:06 INFO AbstractWatcher:157 - CustomResource watcher running for kinds SparkCluster
2019-03-15 09:59:09 INFO AbstractOperator:96 - Running full reconciliation for namespace abs and kind SparkCluster..
2019-03-15 09:59:09 INFO AbstractOperator:166 - no change was detected during the reconciliation
good catch! I should have tested it better. It's because of the default value in the annotation parameter `@Operator(... crd = true)`.

now it should be fixed: the `crd` parameter from the `@Operator` annotation is used only if the env variable `CRD` is empty.
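The fixed precedence could be sketched roughly like this (a minimal illustration only; `CrdFlagResolver` and `useCrd` are hypothetical names, not the actual operator code, which resolves the flag inside the operator SDK):

```java
// Sketch of the intended precedence: the CRD env variable, when set,
// overrides the crd=... default from the @Operator annotation; the
// annotation default applies only when CRD is empty or unset.
public class CrdFlagResolver {

    /**
     * @param envValue      value of the CRD env variable (may be null or empty)
     * @param annotationCrd the crd=... default from the @Operator annotation
     * @return true if the operator should create CR watchers
     */
    public static boolean useCrd(String envValue, boolean annotationCrd) {
        if (envValue == null || envValue.trim().isEmpty()) {
            return annotationCrd;               // fall back to the annotation default
        }
        return Boolean.parseBoolean(envValue);  // env variable wins
    }
}
```

With this precedence, `CRD=false` forces ConfigMap watchers even when the annotation says `crd = true`, while leaving `CRD` unset keeps the annotation's default.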
ack, i'll give it a test
thanks @elmiko
Description

If the `CRD` environment variable was empty, the default behavior was to create the watchers for config maps. This PR reverses the logic, so that by default we create CR watchers. If the user wants config maps, they need to explicitly specify `CRD=false`.

Types of changes