Currently, the Python script that copies metadata across accounts runs locally. We need to decide and agree where it is best deployed and the conditions under which it should run.
Proposal
How do we run it?
Options for where we run it from:
* CaDeT workflow
* Separate workflow
  * Triggered only by changes to the relevant metadata (we can use the extracted metadata file)
* On a schedule: take a stab at guessing when the metadata is updated and run it then (e.g. a Lambda on a timer)
  * Means the copied metadata might not be current
* Other?
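If we go with a workflow or Lambda triggered off the extracted metadata file, one way to avoid redundant runs is to copy only when that file has actually changed. A minimal sketch (the file names `extracted_metadata.json` and `last_hash.txt` are illustrative assumptions, not anything agreed yet):

```python
import hashlib
from pathlib import Path


def metadata_changed(metadata_path: Path, state_path: Path) -> bool:
    """Return True if the extracted metadata file changed since the last run.

    Keeps a SHA-256 digest of the file in state_path between runs; in a
    Lambda this state would live somewhere persistent (e.g. S3) instead
    of local disk -- local files here are just for the sketch.
    """
    digest = hashlib.sha256(metadata_path.read_bytes()).hexdigest()
    previous = state_path.read_text() if state_path.exists() else None
    if digest == previous:
        return False  # nothing relevant changed; skip the copy
    state_path.write_text(digest)  # remember for the next invocation
    return True
```

The copy script itself would then run only when `metadata_changed(...)` returns True, whichever deployment option we pick.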
Context
Follows on from #5859
Spike requirements
Data Engineer: 3 days
Definition of Done