@RaccoonForever The API payload for the jobs submit call is different from the one returned for a job.
You can see the expected payload here: https://docs.databricks.com/api/workspace/jobs/submit
In particular, it does not support job_clusters; in the submit payload, the cluster configuration should be included within each task definition.
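For illustration, a minimal submit payload along those lines. This is a sketch only: the run name, cluster spec, and notebook path below are placeholder values, not taken from the original report.

{
  "run_name": "one-time-run",
  "tasks": [
    {
      "task_key": "main",
      "new_cluster": {
        "spark_version": "13.3.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 1
      },
      "notebook_task": {
        "notebook_path": "/path/to/notebook"
      }
    }
  ]
}

Note that new_cluster sits inside the task itself, rather than being referenced via job_cluster_key.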
If you instead want to trigger a run of an existing job, you can use the databricks jobs run-now command.
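For example (assuming the positional job-ID form of the current CLI):

databricks jobs run-now JOB_ID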
Describe the issue
I'm currently trying to execute a one-time (run-once) job using the CLI: databricks jobs submit --json XXXX
It gives me the error: "Error: One of job_cluster_key, new_cluster, or existing_cluster_id must be specified."
Something to note: if I change the job's cluster to an existing interactive cluster, it works. That's why I suspect the cluster configuration is the problem.
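For illustration, a hypothetical task entry for the variant that worked (the cluster ID and notebook path are placeholders, not the reporter's actual values):

{
  "task_key": "main",
  "existing_cluster_id": "0123-456789-abcd1234",
  "notebook_task": { "notebook_path": "/path/to/notebook" }
}

existing_cluster_id is one of the three fields the error message accepts, which would explain why this variant passes validation while a job_cluster_key referencing job_clusters does not.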
I thought it was linked to https://github.com/databricks/cli/issues/992, but everything in my spark_conf is already quoted, so it doesn't seem to be the same issue.
Edit: added the JSON file
Steps to reproduce the behavior
How I generated the JSON:
./databricks jobs get JOB_ID -o json | jq .settings > workflow.json
My JSON file: workflow.json (attached).
Then, to submit, I execute:
./databricks.exe jobs submit --json '@workflow.json'
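Given the answer above, a workaround sketch (untested; it assumes the settings from jobs get use job_clusters entries referenced by job_cluster_key in each task, and submit.json is just an illustrative output filename) that inlines each job cluster into its tasks before submitting:

# Inline each job_clusters entry into the tasks that reference it, then
# drop job_clusters itself, which jobs/submit does not accept. You may
# also need to remove other settings-only fields (e.g. schedule).
jq '(.job_clusters // [] | map({(.job_cluster_key): .new_cluster}) | add) as $jc
    | .tasks |= map(if .job_cluster_key
        then (.new_cluster = $jc[.job_cluster_key] | del(.job_cluster_key))
        else . end)
    | del(.job_clusters)' workflow.json > submit.json
./databricks.exe jobs submit --json '@submit.json'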
Expected Behavior
I expect it to launch a one-time run of the job with my configuration.
Actual Behavior
It gives me the error: "Error: One of job_cluster_key, new_cluster, or existing_cluster_id must be specified."
OS and CLI version
Windows with Git Bash. Databricks CLI: 0.210.1
Is this a regression?
No idea.
Debug Logs