MI-DPLA / combine

Combine /kämˌbīn/ - Metadata Aggregator Platform

Consider moving job_details or Job to Mongo #283

Open ghukill opened 6 years ago

ghukill commented 6 years ago

As job_details, also referred to as "Job parameters", become more important for re-running and editing Jobs, it's becoming unclear which flags are set for a Job in SQL and which are captured in job_details.

For example, a Job includes the boolean SQL column published. However, this is also recorded in job_details under a published key, where it can be edited before a re-run, and Spark keys off the job_details value for re-publishing/unpublishing. A rough sketch of that duplication follows.
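
A minimal sketch of the duplication described above, assuming a Django-style Job model with a published BooleanField alongside job_details serialized into a text column (hypothetical field names, not Combine's actual models):

```python
import json

from django.db import models


class Job(models.Model):

    # boolean flag stored directly as a SQL column
    published = models.BooleanField(default=False)

    # "Job parameters", serialized as JSON into a text column
    job_details = models.TextField(null=True, default=None)

    @property
    def job_details_dict(self):
        # deserialized on access; edits here can drift from the SQL column above
        return json.loads(self.job_details) if self.job_details else {}
```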

Now that job_details is entirely a JSON / Python dictionary, it's conceivable it could be stored separately in Mongo instead of being serialized/deserialized to the SQL Job table. But that keeps the disconnect. The logical extension might be moving the Job model -- and conceivably all models? -- to Mongo. Editing JSON, with the nice JSON editor, is advanced but doable, whereas GUI forms do not exist for modifying SQL columns.
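
A rough sketch of what storing job_details in Mongo could look like, using pymongo with an assumed database/collection layout and keying documents by the SQL Job id (illustrative only, not a proposal for Combine's actual schema):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
job_details_col = client["combine"]["job_details"]


def save_job_details(job_id, details):
    # upsert the full parameters dict as one document, keyed by the SQL Job id
    job_details_col.replace_one(
        {"job_id": job_id},
        {"job_id": job_id, **details},
        upsert=True,
    )


def get_job_details(job_id):
    # no serialize/deserialize step; the dict comes back ready to edit and re-run
    doc = job_details_col.find_one({"job_id": job_id}, {"_id": 0, "job_id": 0})
    return doc or {}


# e.g. toggling the published flag before a re-run:
# save_job_details(42, {**get_job_details(42), "published": False})
```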