Closed: lennart closed this issue 4 months ago
ah, I realize that in current master this is only the case for agents (control planes do pass this on). Is this intentional?
also, when specifying zram on a nodepool that has a nodes map, even with the suggested change one still has to explicitly configure zram_size for every node in the mapping, otherwise the nodepool setting is overridden with the default for each node (which is ""):
agent_nodepools = [
  {
    name        = "agent-small",
    server_type = "cx21",
    location    = "fsn1",
    labels      = [],
    taints      = [],
    zram_size   = "2G",
    nodes = {
      "1" : {
        append_index_to_node_name = false,
        location                  = "nbg1",
        labels                    = []
      },
      "20" : {
        append_index_to_node_name = false,
        labels                    = []
      }
    },
    longhorn_volume_size = 0,
    kubelet_args         = ["runtime-request-timeout=10m0s"]
  },
]
I would expect that, if I do not specify zram_size for a node in the mapping, it would use the value specified on the pool (maybe one could use a different default for the nodes in the mapping that is considered unset?).
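A minimal sketch of that fallback logic, assuming the module has access to the pool object and its nodes map roughly as below; the local name effective_zram_size and the var.nodepool shape are hypothetical, not the module's actual internals:

locals {
  # Hypothetical illustration: a node-level zram_size wins, otherwise
  # the nodepool-level value applies, instead of the "" default.
  effective_zram_size = {
    for key, node in var.nodepool.nodes :
    key => try(node.zram_size, var.nodepool.zram_size)
  }
}

try() here assumes nodes is a loosely typed map, so node.zram_size fails (and falls through to the pool value) whenever a node entry does not define its own zram_size.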
so currently I ended up with:
agent_nodepools = [
  {
    name        = "agent-small",
    server_type = "cx21",
    location    = "fsn1",
    labels      = [],
    taints      = [],
    nodes = {
      "1" : {
        append_index_to_node_name = false,
        location                  = "nbg1",
        zram_size                 = "2G",
        labels                    = []
      },
      "20" : {
        append_index_to_node_name = false,
        zram_size                 = "2G",
        labels                    = []
      }
    },
    longhorn_volume_size = 0,
    kubelet_args         = ["runtime-request-timeout=10m0s"]
  },
]
and the change in agents.tf that passes on the zram value (sketched below).
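For reference, a sketch of that change, assuming agents.tf instantiates a host module per node (as the swap_size pass-through mentioned in the Description suggests); everything except the zram_size line is illustrative context, not the file's actual contents:

module "agents" {
  # ... existing arguments of the host module ...
  swap_size = each.value.swap_size
  # the added line: forward the nodepool/node zram_size to the host
  zram_size = each.value.zram_size
}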
@lennart You did well, just merged the PR; we do not want to block this possibility when it's just one line away. Thanks for this.
@mysticaltech thanks!
Description
the agent_nodepools/control_plane_nodepools option for zram_size is not passed on to the actual host resource, therefore zram swap is not configured (although the corresponding files are in place, they are just not activated). I guess

zram_size = each.value.zram_size

has to be appended right after swap_size in the corresponding files for agents and control planes.
Kube.tf file
Screenshots
No response
Platform
Linux