Closed seansund closed 2 years ago
This is happening because the Job is eventually removed from the cluster after it completes successfully, based on the `ttlSecondsAfterFinished` value, which is set to 300s. When the destroy runs later, it looks for this Job, can't find it, and returns an error.
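For reference, the TTL behavior comes from the Job spec itself. A minimal manifest with the 300s TTL described above would look something like this (the name, image, and command are hypothetical placeholders, not the module's actual Job):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: setup-job                  # hypothetical name
spec:
  ttlSecondsAfterFinished: 300     # Job object is deleted ~300s after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: setup
          image: busybox           # hypothetical image
          command: ["sh", "-c", "echo done"]
```

Once the TTL expires, the Job object is gone from the API server, so any later lookup by name fails with a not-found error.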
The lookup is performed in an external data source, which is invoked on every `plan` after the first `apply`, so it causes problems in more cases than just the destroy.
The logic can be changed to wait briefly for the Job and, if it is never created, return without error. If the Job is created, the process keeps the current behavior: wait for it to complete successfully and return an error if it does not.
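As a rough sketch (not the actual provider code), the bounded wait could look like the following, with `job_exists` stubbed in place of the real cluster lookup:

```shell
#!/bin/sh
# Sketch only: poll for the Job for a bounded window. If it never appears,
# succeed quietly; if it does appear, hand off to the existing
# wait-for-completion logic.

job_exists() {
  # Stub for illustration; the real check would query the cluster,
  # e.g. `kubectl get job "$1"`.
  [ -e "/tmp/job-$1" ]
}

wait_for_job() {
  name=$1; retries=${2:-5}; interval=${3:-2}
  i=0
  while [ "$i" -lt "$retries" ]; do
    if job_exists "$name"; then
      echo "found"      # Job exists: continue with the current completion wait
      return 0
    fi
    i=$((i + 1))
    sleep "$interval"
  done
  echo "absent"         # Job was never created: return without error
  return 0
}

wait_for_job demo 2 0
```

The key difference from the current behavior is the final `return 0`: a Job that never shows up is treated as "nothing to wait for" rather than as a failure, which covers both the destroy case and repeated plans after the TTL has expired.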