Mrart opened this issue 4 years ago
@functicons I think this is a much-needed feature in our production environment. What do you think about it?
While this feature may bring some convenience, I do have a few concerns:
1) The length of the log. If it's too long, it could make the CR hard to read.
2) Because `-n 100` doesn't guarantee to provide enough information, you might still want to check more logs with `kubectl logs <cluster-name>-job`.
3) It can be automated with a simple script that puts all the information about the cluster together (see the sketch below); it doesn't have to be built into the operator itself and increase the complexity of the core operator.
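For reference, a minimal sketch of such a script, written here in Go around plain `kubectl` calls; the cluster name, namespace, and pod label selector are assumptions for illustration:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run shells out to kubectl and streams its output, so everything ends up
// in one place (redirect stdout to a file to get a single report).
func run(args ...string) {
	fmt.Printf("\n===== kubectl %v =====\n", args)
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "kubectl %v failed: %v\n", args, err)
	}
}

func main() {
	cluster := "my-flinkcluster"     // assumed FlinkCluster name
	ns := "default"                  // assumed namespace
	selector := "cluster=" + cluster // assumed label the operator puts on its pods

	run("get", "flinkcluster", cluster, "-n", ns, "-o", "yaml")
	run("describe", "pods", "-n", ns, "-l", selector)
	run("logs", "-n", ns, "-l", selector, "--all-containers", "--tail=100")
}
```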
OK, I think we can do it only when the job status is Failed, and keep the log under 100 lines. In our env I can already do this with the Kubernetes client, but I think providing some info in the CR when the job pod fails would be more convenient.
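Roughly what I mean by doing it with the Kubernetes client today, as a sketch with client-go; the namespace and the label selector of the job pod are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	kubeconfig := clientcmd.NewDefaultClientConfigLoadingRules().GetDefaultFilename()
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	namespace := "default"                              // assumed namespace
	selector := "cluster=my-flinkcluster,component=job" // assumed job pod labels

	pods, err := client.CoreV1().Pods(namespace).List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		panic(err)
	}

	tail := int64(100)
	for _, pod := range pods.Items {
		if pod.Status.Phase != corev1.PodFailed {
			continue
		}
		// Equivalent of `kubectl logs <pod> --tail=100`.
		req := client.CoreV1().Pods(namespace).GetLogs(pod.Name,
			&corev1.PodLogOptions{TailLines: &tail})
		stream, err := req.Stream(context.TODO())
		if err != nil {
			fmt.Fprintf(os.Stderr, "cannot read logs of %s: %v\n", pod.Name, err)
			continue
		}
		fmt.Printf("--- logs of failed pod %s ---\n", pod.Name)
		io.Copy(os.Stdout, stream)
		stream.Close()
	}
}
```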
Okay, go ahead.
@functicons
> 2) Because `-n 100` doesn't guarantee to provide enough information, you might still want to check more logs with `kubectl logs <cluster-name>-job`.

`kubectl logs job/aa-job --tail=100` can do this.
What's your question?
When I start a FlinkCluster with a bad jar, the job pod fails to submit to the JM, and I can use `kubectl logs <cluster-name>-job` to get the error logs. But I think that's not convenient. So can we put the log into the job status when the job fails or starts?
If we do this, I can use `kubectl get flinkcluster xxx -o json` to get the detailed error log. It would be equivalent to `kubectl logs <cluster-name>-job --tail=100`.
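A rough sketch of what I imagine on the operator side; this is not the operator's current API: the CRD group/version and the `failureLog` status field are assumptions, and it requires the status subresource to be enabled:

```go
package jobstatus

import (
	"context"
	"encoding/json"
	"io"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Assumed GVR of the FlinkCluster CRD.
var flinkClusterGVR = schema.GroupVersionResource{
	Group:    "flinkoperator.k8s.io",
	Version:  "v1beta1",
	Resource: "flinkclusters",
}

// recordJobFailureLog reads the last 100 lines of the failed job pod and
// merge-patches them into a hypothetical status.components.job.failureLog
// field of the FlinkCluster.
func recordJobFailureLog(ctx context.Context, cfg *rest.Config,
	ns, clusterName, jobPodName string) error {

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	tail := int64(100)
	stream, err := client.CoreV1().Pods(ns).GetLogs(jobPodName,
		&corev1.PodLogOptions{TailLines: &tail}).Stream(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()
	logBytes, err := io.ReadAll(stream)
	if err != nil {
		return err
	}

	patch, err := json.Marshal(map[string]interface{}{
		"status": map[string]interface{}{
			"components": map[string]interface{}{
				"job": map[string]interface{}{
					"failureLog": string(logBytes), // hypothetical field
				},
			},
		},
	})
	if err != nil {
		return err
	}

	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return err
	}
	_, err = dyn.Resource(flinkClusterGVR).Namespace(ns).Patch(
		ctx, clusterName, types.MergePatchType, patch,
		metav1.PatchOptions{}, "status")
	return err
}
```

With something like this, `kubectl get flinkcluster xxx -o jsonpath='{.status.components.job.failureLog}'` would print the failure log directly.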