aws-samples / containers-blog-maelstrom


cis-bottlerocket-validating-script issue #73

Closed ChiefHolland closed 8 months ago

ChiefHolland commented 1 year ago

DM-VERITY Validation Error

Working

38 verity_on=$(grep -Fw "dm-mod.create=root,,,ro,0" /proc/cmdline | awk '{print $18}')
39 restart_on_corrupt=$(grep -Fw "dm-mod.create=root,,,ro,0" /proc/cmdline | awk '{print $29}')

Not Working

38 verity_on=$(grep -Fw "dm-mod.create=root,,,ro,0" /proc/cmdline | awk '{print $20}')
39 restart_on_corrupt=$(grep -Fw "dm-mod.create=root,,,ro,0" /proc/cmdline | awk '{print $31}')
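A possibly less brittle check (a sketch only, assuming what the script needs to detect is the presence of the verity target keyword and the restart_on_corruption option in the dm-mod.create stanza) is to match the tokens by name instead of by awk field position, since field positions shift between Bottlerocket releases:

# Sketch: detect the dm-verity settings by token rather than by field position
line=$(grep -Fw "dm-mod.create=root,,,ro,0" /proc/cmdline)
verity_on=$(echo "$line" | grep -ow "verity" | head -n1)
restart_on_corrupt=$(echo "$line" | grep -ow "restart_on_corruption" | head -n1)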

elamaran11 commented 1 year ago

@ChiefHolland Do you mind submitting a PR?

adnankoroth commented 11 months ago

Hi, I have a similar issue. In my environment I have Bottlerocket instances set up by following the AWS blog post, and the Bottlerocket configuration settings are passed through to the managed nodes via user data.

When I SSM into the instances and run the validating script, it says 26/26 checks passed. However, running it as a Kubernetes Job fails 2 checks (1.3.1 and 3.4.1.3).

I tried the following, but with no favorable results:
a. Having the Job run on a specific node while SSMing into that node in parallel to see if there is any difference.
b. Giving the container elevated privileges and even letting it share the host's namespaces (a sketch of such a Job follows this comment).

Any help would be much appreciated.
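For reference, a minimal sketch of option (b), running the validation as a Job that shares the host namespaces. The image name is a placeholder rather than the blog's actual artifact, and it assumes the image's entrypoint runs the checks; hostNetwork is included because iptables rules are per network namespace, which matters for the 3.4.1.x checks:

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: cis-bottlerocket-validation
spec:
  backoffLimit: 0
  template:
    spec:
      hostNetwork: true        # so the iptables checks see the node's rules, not the pod's
      hostPID: true
      containers:
      - name: validator
        image: cis-validation:latest   # placeholder image name
        securityContext:
          privileged: true
      restartPolicy: Never
EOF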

nike21oct commented 9 months ago

Hi @adnankoroth and all, I need help with an issue. I have an EKS cluster running on the Bottlerocket AMI, and I am doing CIS benchmarking of the Bottlerocket AMI by following the AWS document https://aws.amazon.com/blogs/containers/validating-amazon-eks-optimized-bottlerocket-ami-against-the-cis-benchmark/. That document includes a procedure for changing iptables rules, and when I implemented those rules via the bootstrap container, my NGINX ingress controller pod went into CrashLoopBackOff and I can no longer access my application from outside the cluster; the ingress stops functioning. In the AWS target group for the NGINX controller's load balancer, the protocol:port is TLS:32443 and the health check uses protocol HTTP on port 32002. What do I need to do here? Can anyone please help?

elamaran11 commented 9 months ago

@ajpaws This is related to your artifact, please take care of it.

ajpaws commented 9 months ago

@nike21oct did you try opening the additional ports 32002/32443 required for your application? The blog only opens the kubelet port, for a simple kubectl exec/logs use case, as a reference.
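One quick way to confirm which NodePorts the ingress controller Service actually uses is to read them off the Service object; a sketch, assuming the common ingress-nginx namespace and Service name (adjust to your deployment):

# List port names and their NodePorts for the ingress controller Service
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.nodePort}{"\n"}{end}'

# With externalTrafficPolicy: Local, the NLB health check uses a separate port
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.spec.healthCheckNodePort}'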

nike21oct commented 9 months ago

Hi @ajpaws, yes I tried that but did not have success. Below are the iptables rules I implemented from user data via the bootstrap container.

#!/usr/bin/env bash

# Flush iptables rules
iptables -F

# 3.4.1.1 Ensure IPv4 default deny firewall policy (Automated)
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

# Allow inbound traffic for kubelet (so kubectl logs/exec works)
iptables -I INPUT -p tcp -m tcp --dport 10250 -j ACCEPT

# Adding NodePorts of the NGINX ingress controller
iptables -I INPUT -p tcp -m tcp --dport 32443 -j ACCEPT   # For TLS traffic
iptables -I INPUT -p tcp -m tcp --dport 32002 -j ACCEPT   # For health checks
iptables -I INPUT -p tcp -m tcp --dport 32080 -j ACCEPT

# 3.4.1.2 Ensure IPv4 loopback traffic is configured (Automated)
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT -s 127.0.0.0/8 -j DROP

# 3.4.1.3 Ensure IPv4 outbound and established connections are configured (Manual)
iptables -A OUTPUT -p tcp -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p udp -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p icmp -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp -m state --state ESTABLISHED -j ACCEPT
iptables -A INPUT -p udp -m state --state ESTABLISHED -j ACCEPT
iptables -A INPUT -p icmp -m state --state ESTABLISHED -j ACCEPT

# Flush ip6tables rules
ip6tables -F

# 3.4.2.1 Ensure IPv6 default deny firewall policy (Automated)
ip6tables -P INPUT DROP
ip6tables -P OUTPUT DROP
ip6tables -P FORWARD DROP

# Allow inbound traffic for kubelet on IPv6 if needed (so kubectl logs/exec works)
ip6tables -A INPUT -p tcp --destination-port 10250 -j ACCEPT

# 3.4.2.2 Ensure IPv6 loopback traffic is configured (Automated)
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A OUTPUT -o lo -j ACCEPT
ip6tables -A INPUT -s ::1 -j DROP

# 3.4.2.3 Ensure IPv6 outbound and established connections are configured (Manual)
ip6tables -A OUTPUT -p tcp -m state --state NEW,ESTABLISHED -j ACCEPT
ip6tables -A OUTPUT -p udp -m state --state NEW,ESTABLISHED -j ACCEPT
ip6tables -A OUTPUT -p icmp -m state --state NEW,ESTABLISHED -j ACCEPT
ip6tables -A INPUT -p tcp -m state --state ESTABLISHED -j ACCEPT
ip6tables -A INPUT -p udp -m state --state ESTABLISHED -j ACCEPT
ip6tables -A INPUT -p icmp -m state --state ESTABLISHED -j ACCEPT

After implementing these rules my NGINX ingress controller pod goes into CrashLoopBackOff, even with the NodePort ACCEPT rules for 32443/32002/32080 shown above in place.
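As a debugging step (a sketch, assuming iptables is still the active datapath on the node rather than an eBPF kube-proxy replacement), the packet counters on the INPUT chain can show whether the NodePort traffic is reaching the ACCEPT rules or falling through to the default DROP policy:

# List INPUT rules with packet/byte counters and rule numbers
iptables -L INPUT -n -v --line-numbers

# Zero the counters, trigger the NLB health check again, then re-list and see which rule increments
iptables -Z INPUT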

stockholmux commented 9 months ago

@nike21oct Are you using anything different than the standard setup? Different CNI maybe? kube-proxy in IPVS rather than iptables mode?

nike21oct commented 9 months ago

Hi @stockholmux, yes, we are using the standard setup only; the CNI we are using is Cilium and the AMI we are using is Bottlerocket.

nike21oct commented 9 months ago

Hi @ajpaws, please guide me on how I can solve this issue.

nike21oct commented 9 months ago

Hi everyone, please guide me on how I can solve this issue.

ajpaws commented 8 months ago

@nike21oct

I deployed 2 simple Kubernetes service apps on a CIS Bottlerocket node, exposed via an NLB: one using the IP target group mode and the other using instance mode. Both are healthy behind the NLB. I am using the VPC CNI. The 1st app, in IP mode, bypasses kube-proxy because the NLB talks to the Pod IP directly, so no issues there. For the 2nd app, kube-proxy sets up iptables rules for the NodePort so the NLB can reach it:

bash-5.1# iptables -L KUBE-NODEPORTS -n -v -t nat
Chain KUBE-NODEPORTS (1 references)
 pkts bytes target                    prot opt in  out  source     destination
  753 45160 KUBE-EXT-4RKEGJ5SIFR4RAZ2 6    --  *   *    0.0.0.0/0  0.0.0.0/0    /* ns-nlb-test1/nlb-test:http-web */ tcp dpt:31532
    0     0 KUBE-EXT-O3B3AN7S2LB2TE5R 6    --  *   *    0.0.0.0/0  0.0.0.0/0    /* ns-nlb-test/nlb-test:http-web */ tcp dpt:30893

In the above output, 31532 is the NodePort for the 2nd app.

In general I would recommend using IP mode for the target group.
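For reference, with the AWS Load Balancer Controller the NLB target type is chosen per Service via annotations; a minimal sketch (the Service name, selector, and ports are placeholders, and it assumes the AWS Load Balancer Controller is installed):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller            # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"   # or "instance"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx    # placeholder selector
  ports:
  - name: https
    port: 443
    targetPort: 443
EOF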

nike21oct commented 8 months ago

Hi @ajpaws, thanks for the response. I have a few questions:

1. Did you implement the benchmarking via the bootstrap container, and how are you allowing NodePort 31532 in your iptables rules?
2. I am using instance mode in my setup, so how can I allow my NodePorts in the iptables rules so that it works in instance mode? I am allowing the NodePorts with the commands below and it is not working:

iptables -I INPUT -p tcp -m tcp --dport 32443 -j ACCEPT   # For TLS traffic
iptables -I INPUT -p tcp -m tcp --dport 32002 -j ACCEPT   # For health checks
iptables -I INPUT -p tcp -m tcp --dport 32080 -j ACCEPT

How can I implement this using instance mode?

ajpaws commented 8 months ago

@nike21oct

  1. I didn't make any extra changes and used the same bootstrap container used in the blog. kube-proxy adds these extra rules automatically.
  2. I guess you are using Cilium, which is different from my setup. Can you please try with the VPC CNI in IP mode / instance mode?
nike21oct commented 8 months ago

Hi @ajpaws, yes, you are correct: we are using Cilium as the CNI and our EKS cluster is running in instance mode, and it is difficult to change that. So is it possible to make this work in instance mode with the Cilium CNI?

One more question: when you tried instance mode (meaning the NLB sends traffic to the nodes (EC2 instances) themselves, and Kubernetes networking, such as kube-proxy, then routes the traffic to the appropriate pods on those nodes), how did you allow the ingress controller service's NodePort in the iptables rules, given that the bootstrap container from the document drops all traffic by default?

ajpaws commented 8 months ago

@nike21oct I did not make any changes to the bootstrap container explicitly. Instead, kube-proxy adds iptables rules automatically to allow the NodePort, as I mentioned above.

I have also modified both the bootstrap and validation container Dockerfiles to move the iptables backend from nftables to legacy: https://github.com/aws-samples/containers-blog-maelstrom/pull/116.
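For context, on Debian/Ubuntu-based images that switch is usually done with update-alternatives; a minimal sketch of the idea (not necessarily the exact change in PR #116), typically run as a RUN step in the Dockerfile:

# Point the iptables/ip6tables commands at the legacy backend instead of nftables
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy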

Regarding support for a Cilium-based setup, I would recommend opening an issue in the respective repo to get the right support.

I am closing this issue since both IP mode and instance mode work with the default VPC CNI, which is the scope of the blog.