Open varunbpatil opened 3 years ago
I'm having the same problem. As far as I can tell, there was a fix here https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/pull/716/files but it was later undone by this PR https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/pull/1111/files
AFAIK the operator infers the executor status from indirect signs rather than from the executor's exit code. That seems like poor logic, but maybe they know something we don't.
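To illustrate what "using the exit code" would mean: below is a hypothetical Python sketch (the operator itself is written in Go, and this is not its actual code) of deriving an executor's terminal state from the executor container's `terminated.exitCode` in the pod status, falling back to the pod phase only when no exit code is available. The container name `spark-kubernetes-executor` and the status layout mirror what Kubernetes reports for Spark executor pods, but treat both as assumptions here.

```python
def executor_state_from_pod(pod_status: dict) -> str:
    """Return 'COMPLETED' or 'FAILED' for an executor pod.

    Prefers the executor container's exit code over indirect signs
    (pod phase). `pod_status` mimics the `status` field of a v1 Pod.
    """
    for container in pod_status.get("containerStatuses", []):
        # Assumed default executor container name in Spark on K8s.
        if container.get("name") != "spark-kubernetes-executor":
            continue
        terminated = container.get("state", {}).get("terminated")
        if terminated is not None:
            # Exit code 0 means the executor finished cleanly,
            # regardless of what the pod phase says.
            return "COMPLETED" if terminated.get("exitCode") == 0 else "FAILED"
    # No terminated container status found: fall back to the pod phase.
    return "COMPLETED" if pod_status.get("phase") == "Succeeded" else "FAILED"
```

With this approach an executor that exited 0 would be reported COMPLETED even if the pod object was deleted before its phase settled, which is the behavior the earlier PR seemed to aim for.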
Adding @ImpSy, the author of https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/pull/1111/files, for comments. It seems like a pretty big blocker that the executor state is almost always reported as FAILED.
We may be running into the same issue. The job keeps running even after the Spark application is complete. Can someone please revert the changes? :)
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity. Please comment "/reopen" to reopen it.
/reopen
@ChenYi015: Reopened this issue.
This is unrelated to #671 because I don't see any log of the following type in the spark operator logs.
VERSIONS

- Operator = v1beta2-1.2.3-3.1.1
- Spark = 3.1.1
- Python = 3.8
APPLICATION
examples/spark-py-pi.yaml
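For context, the referenced example is roughly the following SparkApplication manifest. This is a hedged reconstruction from the upstream examples directory, not a verbatim copy; the image tag, namespace, and resource values are assumptions and may differ between operator releases.

```yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: pyspark-pi
  namespace: default
spec:
  type: Python
  mode: cluster
  image: gcr.io/spark-operator/spark-py:v3.1.1   # assumed tag
  mainApplicationFile: local:///opt/spark/examples/src/main/python/pi.py
  sparkVersion: "3.1.1"
  driver:
    cores: 1
    memory: 512m
    serviceAccount: spark
  executor:
    instances: 1
    cores: 1
    memory: 512m
```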
DRIVER LOGS
EXECUTOR LOGS
OPERATOR LOGS