Closed: jtopjian closed this issue 3 years ago.
Revisiting this after a week or so, I'm leaning on the following:
1. Allow updates to anything LXD wants to update. Discretion is left to the user. This is the easiest way to handle things since anything in config would just be updated. "User defined" keys would easily be handled here, too.
2. Export a compiled collection of configuration (combining all profile configuration) into an exported, read-only attribute of either container_config or compiled_config (sketched below).
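As a rough sketch of option 2 only: the attribute name container_config is hypothetical (nothing in this thread implies it was implemented), and the image/values are illustrative. The idea is that the merged profile-plus-resource configuration would be readable from the resource:
resource "lxd_container" "web" {
  name  = "web"
  image = "ubuntu"

  # User-defined configuration (input)
  config = {
    "limits.cpu" = "2"
  }
}

# Hypothetical read-only attribute holding the compiled (profile + resource) config
output "web_compiled_config" {
  value = "${lxd_container.web.container_config}"
}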
In what cases are there computed container configuration items (your environment.FOO above)? Is that when profile config gets merged in?
My feeling at this stage is that, unless the configuration value alters the way Terraform needs to behave during CRUD operations, we should have a light touch and just proxy the user's desires through to LXD - but I'll experiment a little to better understand some of the use cases.
In what cases are there computed container configuration items (your environment.FOO above)? Is that when profile config gets merged in?
Exactly. I see this being useful primarily for provisioning and having access to the "user" and "environment" keys. For example, user.role = mysql or a set of environment variables for some application.
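To make that provisioning use case concrete, a minimal sketch (illustrative keys and values only) of passing such keys through the existing config map:
resource "lxd_container" "db" {
  name  = "db"
  image = "ubuntu"

  config = {
    # Free-form user.* key, e.g. consumed by provisioning scripts
    "user.role" = "mysql"

    # environment.* keys become environment variables inside the container
    "environment.APP_ENV" = "production"
  }
}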
we should have a light touch and just proxy the user's desires through to LXD
This is my feeling as well.
I've played around with a few variations of this and I'm kind of "meh" with the results.
A lot of the time, internal container configuration and LXD server configuration (for example, server interface names) are mixed in with the compiled results. I don't think exposing that info is either useful or a good idea.
I think I'll take a step back from this for a while and see if I run into a situation that clarifies the need for this a bit more. Perhaps this can be left open as an ongoing discussion on the topic?
One area I'm starting to play around with, in relation to this, is the cloud-init config key user.user-data. On the one hand there is no reason to extract this key as a separate resource attribute, but on the other hand I feel it would be clearer if the lxd_container resource had an attribute like:
resource "lxd_container" "foo" {
user_data = "${data.template_cloudinit_config.config.rendered}"
}
to make it more intuitive and similar to the aws_instance / openstack_compute_instance_v2 resources.
https://www.terraform.io/docs/providers/aws/r/instance.html#user_data
https://www.terraform.io/docs/providers/openstack/r/compute_instance_v2.html#user_data
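For context, a minimal sketch of the data source referenced in the snippet above; the rendered output of Terraform's template_cloudinit_config data source would feed the proposed user_data attribute (the filename is illustrative):
data "template_cloudinit_config" "config" {
  gzip          = false
  base64_encode = false

  part {
    content_type = "text/cloud-config"
    content      = "${file("cloud-config.yaml")}"
  }
}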
Also, in the future, if we do decide to allow config changes on existing containers, then we can enforce a recreate cycle on resources where the user_data changes.
I'm starting to see the benefits in general of extracting several of the config categories into separate resource attributes. It would allow us to better match the LXD config keys that can be updated in real time vs those that require a restart / recreate.
Also, for the likes of the limits.* namespace, it would allow us to provide validation and early errors for invalid formatting.
I am using Terraform v0.9.6 and LXD provider 0.9.5-beta1.
I am all for allowing dynamic updates of things which LXD allows (for example, limits.cpu or user.custom.key). Whether the software inside the LXD container can or cannot handle such changes should be left up to the user.
I do have a real use case where I may need to re-balance the CPU/memory of running LXD containers without recreating them (they are performing long-running tasks).
Short of having some sort of web GUI (like Portainer for Docker) on the LXD server side, which can interact outside of Terraform, I think allowing dynamic updates from Terraform would be a big (big) plus.
Regards, Shantanu
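A sketch of that rebalancing use case (illustrative image and values; this assumes in-place config updates, which the provider does not support at this point in the thread). The intent is that editing the limits values below would translate to an LXD API update on the running container rather than a destroy/recreate:
resource "lxd_container" "worker" {
  name  = "worker"
  image = "ubuntu"

  config = {
    # Bump these while the container keeps running its long-lived tasks
    "limits.cpu"    = "4"
    "limits.memory" = "8GB"
  }
}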
@jtopjian I'm kind of in the boat of: let the provider allow it if the API allows it. Then what can or can't be changed is maintained just by lxc/lxd, and you don't have to constantly update or test any time something new comes along. A little bit of caveat emptor. But I do like the idea of exposing the cloud-init parameters as separate attributes, i.e., user-data, network-config, vendor-data, etc., in keeping with other cloud providers. It would make my life easier and I would certainly use them. Maybe take a look at the Terraform cloud-init template and see if that might fit in. I haven't used it yet, but I'm getting to the point where stacking configs might come into play.
I think first-class cloud-init parameter(s) are certainly worthwhile. I can investigate this at some point if someone doesn't get to it before me (I'm a little swamped with some other projects at the moment).
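A rough sketch of what those first-class parameters might look like on the resource. The attribute names user_data, network_config, and vendor_data are hypothetical (not implemented in this thread) and the file names are illustrative:
resource "lxd_container" "app" {
  name  = "app"
  image = "ubuntu"

  # Hypothetical first-class cloud-init attributes mapping to the
  # user.user-data, user.network-config and user.vendor-data keys
  user_data      = "${file("user-data.yaml")}"
  network_config = "${file("network-config.yaml")}"
  vendor_data    = "${file("vendor-data.yaml")}"
}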
@jtopjian Now if you really wanted to try something cool, make it possible to use Terraform to help create the user.user-data and user.network-config keys.
@jtopjian Something like the following JSON would be the network-config. The provider could convert the JSON/HCL to YAML and assign it to user.network-config. Example site: http://www.configformats.com/
{
  "version": 1,
  "config": [
    {
      "name": "eth0",
      "type": "physical"
    },
    {
      "type": "nameserver",
      "address": [
        "172.28.0.254"
      ],
      "search": [
        "example.org",
        "example.local"
      ]
    },
    {
      "name": "eth0.2800",
      "subnets": [
        {
          "address": "172.28.0.10/24",
          "control": "auto",
          "gateway": "172.28.0.254",
          "type": "static"
        }
      ],
      "type": "vlan",
      "vlan_id": 2800,
      "vlan_link": "eth0"
    },
    {
      "name": "eth0.1000",
      "subnets": [
        {
          "address": "10.28.0.10/12",
          "control": "auto",
          "type": "static"
        }
      ],
      "type": "vlan",
      "vlan_id": 1000,
      "vlan_link": "eth0"
    }
  ]
}
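For comparison, until such a conversion exists, the same network-config can already be attached by handing cloud-init the YAML directly through the existing config map (filename illustrative):
resource "lxd_container" "net" {
  name  = "net"
  image = "ubuntu"

  config = {
    # cloud-init expects YAML here; the proposal above would have the
    # provider generate this value from JSON/HCL instead
    "user.network-config" = "${file("network-config.yaml")}"
  }
}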
@jtopjian I can't code in Go, but if you write it, I'm glad to test. I have code already that could feed it.
Took a couple of ideas from here and started writing new templates and metadata for the containers :)
Hi. Is there a plan to add an option to disable destroying the container when the config is updated?
updating something like limits.cpu while a container is running could be dangerous.
@jtopjian Could you give us the possibility to choose? In my case I would prefer not to destroy an LXD container with a database. LXD is not Docker; these are long-term containers.
I have added these options and my container was destroyed, which is really dangerous!
config = {
"security.nesting" = true
"security.privileged" = true
}
@atomlab I can certainly review and merge a Pull Request for this. But I have not had time to look at implementing this myself.
I have added these options and my container was destroyed, which is really dangerous!
I do sympathize with having a resource destroyed when it should not have been. Terraform should have notified you that the resource would be destroyed and asked if you wanted to proceed.
As a workaround, you could use the lxc command-line tool to apply these changes to the running container (for example, lxc config set NAME security.nesting true).
@jtopjian Thanks a lot for your answer! terraform plan warns that the resource would be destroyed; I just wasn't expecting this behaviour. Recreating LXD containers every time the config changes is not convenient.
As a workaround, you could use the lxc command-line tool to apply these changes to the running container.
Yes, it works, but it needs an additional step (SSH to the server) instead of using the LXD API only.
I understand the status of this issue now. Thank you!
I've just merged #227 which should take care of this. It'll be available in the next release.
Please let me know if anyone runs into any issues.
All LXD resources can take a set of key/value pairs for "configuration". LXD supports a large, though finite, amount of configuration.
AFAICT, LXD allows all configuration to be updated in real-time. I haven't made any of the config blocks able to be updated because, IMO, updating something like limits.cpu while a container is running could be dangerous.
Thoughts:
- One option is to extract certain config keys into their own structured attributes (see the sketch after this list). I'm not a huge fan of this because keys are stored within LXD as limits.cpu verbatim. I think doing this would be overmodeling things.
- Or keep user and environment in config, but only allow those keys to be updated. I think that would wreak havoc with the Terraform state...
- Should lxd_container.foo have access to environment.FOO? This might be useful for provisioning.
- Merging exported/computed configuration with user-specified configuration might be a bear. Perhaps a better option is to separate input and output. For example, user-defined configuration goes in config, but all configuration is available in a read-only exported container_config block.
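For illustration only, a hedged sketch contrasting the first option above with the existing model. The nested limits block is hypothetical and was never implemented in this thread; the verbatim-key form below it matches how the config map is written elsewhere in this discussion:
# Hypothetical: extract keys into structured attributes
resource "lxd_container" "example_a" {
  name  = "example-a"
  image = "ubuntu"

  limits {
    cpu    = "2"
    memory = "2GB"
  }
}

# Existing model: keys stored verbatim, exactly as LXD sees them
resource "lxd_container" "example_b" {
  name  = "example-b"
  image = "ubuntu"

  config = {
    "limits.cpu"    = "2"
    "limits.memory" = "2GB"
  }
}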