Montana / terraform-travis-public

Terraform policies with Chirp for Travis CI

Terraform internal variable #2

Closed EvilOtto2 closed 1 year ago

EvilOtto2 commented 1 year ago

Hi @Montana,

I have a question that I'm not sure how to formulate, so I was hoping you could help. The best way to frame it is with an example. I have the following resource definition for AWS subnets, driven by an input variable:

resource "aws_subnet" "monitoring_subnetwork" {
  count = length(var.monitoring_subnets)

  vpc_id     = module.vpc.vpc_id
  cidr_block = var.monitoring_subnets[count.index]

  availability_zone= "${data.aws_availability_zones.available.names[count.index % length(data.aws_availability_zones.available.names)]}"

  tags = {
    Name = "Monitoring private-1${replace(
      data.aws_availability_zones.available.names[count.index % length(data.aws_availability_zones.available.names)], 
      data.aws_availability_zones.available.id, "")}"
  }
}

I want to simplify this code to make it more readable and maintainable.

Montana commented 1 year ago

Hi @EvilOtto2,

Right now you're using count.index to pick an availability zone round-robin, based on index % length_of_array, and the result of that mod is calculated twice (in other cases, even three times).

Terraform doesn't let you declare a variable inside a resource block, and count.index isn't visible from locals either, so the cleanest refactor is to precompute the zone assignment, the equivalent of

zone_index = count.index % length(data.aws_availability_zones.available.names)

once, as a list in a locals block.
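
A minimal sketch of that refactor, reusing the variable and data source names from your snippet (the local names az_names and subnet_azs are mine):

locals {
  az_names = data.aws_availability_zones.available.names

  # One availability zone per subnet, assigned round-robin.
  subnet_azs = [
    for i in range(length(var.monitoring_subnets)) :
    local.az_names[i % length(local.az_names)]
  ]
}

resource "aws_subnet" "monitoring_subnetwork" {
  count = length(var.monitoring_subnets)

  vpc_id            = module.vpc.vpc_id
  cidr_block        = var.monitoring_subnets[count.index]
  availability_zone = local.subnet_azs[count.index]

  tags = {
    # Same replace() trick as your original, now written only once.
    Name = "Monitoring private-1${replace(local.subnet_azs[count.index], data.aws_availability_zones.available.id, "")}"
  }
}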

Then reuse the precomputed value wherever the operation was repeated. For reference, here's a sample .tf file I've put together so the pattern is easier to see; I just used OpenStack as an example:

resource "openstack_lb_monitor_v1" "monitor_1" {
  type           = "TCP"
  delay          = 30
  timeout        = 5
  max_retries    = 3
  admin_state_up = "true"
}

resource "openstack_lb_pool_v1" "pool_1" {
  name        = "pool_1"
  protocol    = "TCP"
  subnet_id   = openstack_networking_subnet_v2.subnet_1.id
  lb_method   = "ROUND_ROBIN"
  monitor_ids = ["${openstack_lb_monitor_v1.monitor_1.id}"]
}

resource "openstack_lb_member_v1" "member_1" {
  pool_id = openstack_lb_pool_v1.pool_1.id
  address = openstack_compute_instance_v2.web-server-1.access_ip_v4
  port    = 80
}

The part you want to pay attention to is the load-balancing algorithm in your config, specifically setting it to round_robin; on AWS these attributes live on the target group:

  load_balancing_algorithm_type = "round_robin"
  slow_start                    = 30 # 30 seconds
  target_type                   = "instance"

enable_cross_zone_load_balancing - (Optional) If true, cross-zone load balancing of the load balancer will be enabled. This is a Network Load Balancer feature. Defaults to false.
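
For context, a hedged sketch of the resource those attributes belong to, assuming an aws_lb_target_group (the name, port, and VPC reference are placeholders):

resource "aws_lb_target_group" "monitoring" {
  name     = "monitoring-tg" # hypothetical name
  port     = 80
  protocol = "HTTP"
  vpc_id   = module.vpc.vpc_id

  # round_robin is the ALB default, but spelling it out keeps intent obvious.
  load_balancing_algorithm_type = "round_robin"
  slow_start                    = 30 # 30 seconds of ramp-up for new targets
  target_type                   = "instance"
}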

Cheers, Montana Mendy

Northskool commented 1 year ago

If he doesn't have to use zone_index inside the aws_subnet "monitoring_subnetwork" resource, then where would you use it?

EvilOtto2 commented 1 year ago

Good question, what do you think @Montana? The solution you provided helped me with round-robinning the DNS, but what about zoning the subnets in AWS?

Montana commented 1 year ago

Hi Evil Otto,

You can prepare a map in your locals, with the subnet as the key and the zone as the value, and use for_each instead of count. That should answer your question. For any kind of table-driven configuration, for_each is pretty much a must-have.
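
A minimal sketch of that pattern, with placeholder CIDRs and zone names (the map contents are assumptions; I've kept your tag scheme):

locals {
  # subnet CIDR => availability zone (hypothetical values)
  monitoring_subnets = {
    "10.0.1.0/24" = "us-east-1a"
    "10.0.2.0/24" = "us-east-1b"
  }
}

resource "aws_subnet" "monitoring_subnetwork" {
  for_each = local.monitoring_subnets

  vpc_id            = module.vpc.vpc_id
  cidr_block        = each.key
  availability_zone = each.value

  tags = {
    Name = "Monitoring private-1${replace(each.value, data.aws_availability_zones.available.id, "")}"
  }
}

With for_each, adding or removing a subnet touches only that one map entry, instead of shifting every index the way count does.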

EvilOtto2 commented 1 year ago

@Montana if I set up a WLC I can put different loads on different servers; in the case of circular-order petition signing, would this make better sense? Thanks for all your help Montana.

Otto.

Montana commented 1 year ago

Hey Otto,

What you're describing is Weighted Least Connection, which is an excellent scheduling algorithm. You can enable aggressive load balancing on the WLC, which lets it balance wireless clients across the APs in an LWAPP system.

Traffic is then distributed to the destination containers using iptables destination NAT (DNAT) rules in a round-robin manner. The problem shows up in environments with a load balancer that Kubernetes doesn't support, e.g., an on-premise data center with a bare-metal load balancer. In such environments, the user must manually configure static routes for inbound traffic in an ad-hoc manner. Since Kubernetes fails to provide a uniform environment from a container-cluster viewpoint, migrating container clusters among different environments will always be a burden, so it ends up looking much like this flow chart:

[Screenshot: flow chart of inbound traffic handling]
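
To make the iptables mechanism concrete, here is a simplified, hypothetical sketch of the kind of DNAT round-robin kube-proxy programs (real rules live in dedicated KUBE-* chains and typically use probability matching; the backend IPs here are made up):

# First packet of every other connection goes to backend A...
iptables -t nat -A PREROUTING -p tcp --dport 80 \
  -m statistic --mode nth --every 2 --packet 0 \
  -j DNAT --to-destination 10.244.1.2:80
# ...and the rest fall through to backend B.
iptables -t nat -A PREROUTING -p tcp --dport 80 \
  -j DNAT --to-destination 10.244.2.3:80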

If you're looking into WLC, that probably means you're also looking into distributed packet handling. CPU performance has improved significantly in recent years thanks to multi-core designs; one of Intel's top-of-the-line server processors now includes up to 28 cores in a single CPU. To get the benefit of multi-core CPUs in communication performance, the handling of interrupts from the NIC and the IP protocol processing have to be distributed across the available physical cores. Receive Side Scaling (RSS) distributes the handling of interrupts from NIC queues to multiple CPU cores. Receive Packet Steering (RPS) then distributes the IP protocol processing to multiple CPU cores by issuing inter-core software interrupts; this is shown in the flow chart below:

[Screenshot: RSS/RPS interrupt-distribution flow chart]

Just doing a simple test, running:

# With RSS enabled on the NIC, pin each RX queue's IRQ to its own core
# (the mask is a CPU bitmask: 1 = CPU0, 2 = CPU1, 4 = CPU2, 8 = CPU3)
echo 1 > /proc/irq/82/smp_affinity
echo 2 > /proc/irq/83/smp_affinity
echo 4 > /proc/irq/84/smp_affinity
echo 8 > /proc/irq/85/smp_affinity
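
If you also want RPS in play, the equivalent knob lives in sysfs (eth0 and the 0xf mask covering CPUs 0-3 are assumptions for this sketch):

# Spread IP protocol processing for eth0's first RX queue across CPUs 0-3
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus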

You can see the performance:

[Screenshot: performance results]

This can get complex when you specifically need to make these calls quickly and down to the wire; I hope this was of help.

Cheers, Montana Mendy

EvilOtto2 commented 1 year ago

Thanks @Montana for really putting that into perspective for me. The K8s cluster settings I'm currently using can be adjusted inside the Ansible inventory/my-cluster/group_vars folder. I'm thinking of using the flannel network; is this feasible, and if so, which configurables must I change? Sorry for all the questions @Montana, you've been really helpful.

Montana commented 1 year ago

Hi Otto,

In regards to using the flannel network, you'll need to change the flannel_interface setting to the host-only interface in the file inventory/my-cluster/group_vars/k8s-cluster/k8s-net-flannel.yml. It's important you check which interface is your host-only interface and put it there. You should also look at roles/network_plugin/flannel/defaults/main.yml: that defines the interface flannel should use for its operations, and it's an inventory cluster-level item, so it might be exactly what you're looking for.
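
A hedged example of what the override might look like in k8s-net-flannel.yml (eth1 is a placeholder; substitute your actual host-only interface name):

# interface flannel binds to; must be the host-only adapter, not NAT
flannel_interface: eth1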

Now, on to the virtual machines: each VM has two interfaces, NAT and a host-only network. NAT is only for internet access; communication between the VMs must use the host-only network (192.168.56.0/24). Without this interface setting, your VMs will pick the NAT network and the cluster will not work.