Looking for an option to set up deployments so that runners local to the environment can execute Terraform commands and architecture changes.
Ideally this would enable a more streamlined deployment path, ensuring that all executions come from one place instead of from individual developers or external applications. Currently, when commands are run from outside the environment, the person executing them is not local to the network, meaning that things like database internals cannot be defined or configured in Terraform. Granted, there are workarounds (VPN, temporarily pin-holing the firewall), but these may not be consistent, may not always be available, and in some cases may be against policy. If the commands were run within the environment itself, Terraform would have the visibility and access needed to ensure things are set up correctly. It would also enable an easier path to regular drift monitoring.
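To make the network-locality point concrete, here is a minimal sketch of the kind of resource that only an in-environment runner can manage. It assumes the community `cyrilgdn/postgresql` provider and a hypothetical private DNS name (`db.internal.example`) that is not reachable from outside the VPC; the exact provider and endpoint would depend on the environment.

```hcl
# Sketch only: assumes a PostgreSQL instance reachable at a private
# address that is NOT exposed outside the environment. A runner inside
# the network can apply this; a developer laptop cannot.
provider "postgresql" {
  host     = "db.internal.example" # private DNS name (assumption)
  port     = 5432
  username = var.db_admin_user
  password = var.db_admin_password
  sslmode  = "require"
}

# Database-internal objects, manageable only from inside the network.
resource "postgresql_role" "app" {
  name  = "app"
  login = true
}

resource "postgresql_database" "app" {
  name  = "app"
  owner = postgresql_role.app.name
}
```

From outside the network, `terraform plan` against these resources simply times out; from an in-environment runner it works without VPNs or firewall pin-holes.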
If this is implemented, we'd need to look hard at security posture: locking down access to the runner and explicitly limiting what it can and cannot reach. The ideal situation is that only a very small set of people can touch it directly, and the only other path is through automation. If misconfigured, this setup could allow for "very bad things", so we need to be extremely cautious and proactive about security at every step.
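One way to sketch that lockdown, assuming AWS (an assumption; the same idea applies on any cloud): give the runner an explicitly scoped role rather than admin, and allow no inbound traffic at all, so the runner pulls work outbound and humans have no direct path to it. The bucket/table ARN variables below are hypothetical placeholders for the Terraform state backend.

```hcl
# Sketch only: a narrowly scoped instance role for the runner.
resource "aws_iam_role" "runner" {
  name = "terraform-runner" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Explicit allow-list instead of AdministratorAccess; state backend
# ARNs are passed in as (hypothetical) variables.
resource "aws_iam_role_policy" "runner" {
  name = "terraform-runner-scope"
  role = aws_iam_role.runner.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "s3:GetObject", "s3:PutObject",
        "dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem",
      ]
      Resource = [var.state_bucket_arn, "${var.state_bucket_arn}/*", var.lock_table_arn]
    }]
  })
}

# No ingress rules at all: nothing, not even SSH, reaches the runner
# directly. Outbound HTTPS only, for pulling jobs and provider APIs.
resource "aws_security_group" "runner" {
  name   = "terraform-runner"
  vpc_id = var.vpc_id
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

In practice the allow-list would grow to cover whatever the runner actually manages, but the default should stay deny-by-default.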
This is a more complex configuration than we have today, and more complex than some deployments will require. If you are just serving some stuff up from S3, you probably don't need this. If you have a complex environment with multiple databases, queues, ETL tools, etc., it would likely make more sense.
If this gets implemented, it should be an OPTIONAL component / deployment configuration.
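Keeping it optional could be as simple as gating the whole thing behind one variable, so simpler deployments never create it. A minimal sketch (the module path is hypothetical):

```hcl
# Sketch only: the runner as an opt-in component, off by default.
variable "enable_local_runner" {
  description = "Create the in-environment Terraform runner."
  type        = bool
  default     = false
}

module "local_runner" {
  source = "./modules/local-runner" # hypothetical module path
  count  = var.enable_local_runner ? 1 : 0

  vpc_id = var.vpc_id
}
```

Deployments that don't set `enable_local_runner = true` get exactly what they have today.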