As the `job_details` (also referred to as "Job parameters") become more important for re-running and editability, it is becoming uncomfortable to track which flags for a Job are set in SQL columns and which are outlined in `job_details`.
For example, a Job includes the boolean SQL column `published`. However, the same flag is also outlined in `job_details` under the `published` key, in such a way that it can be edited before a re-run; Spark keys off the `job_details` value for re-publishing/unpublishing.
Now that `job_details` is entirely JSON / a Python dictionary, it could conceivably be stored separately in Mongo instead of being serialized/deserialized to the SQL Job table. But the disconnect would continue. The logical extension might be moving the Job model -- and conceivably all models? -- to Mongo. Editing JSON, with the nice JSON editor, is advanced but doable, whereas GUI forms do not exist for modifying SQL columns.