aws / aws-network-policy-agent

Apache License 2.0

Network policy blocks established connections to STS. #73

Open wiseelf opened 11 months ago

wiseelf commented 11 months ago

What happened: I have cli script in one namespace and I applied this network policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  ingress: []

The script is a PHP application that uses the aws-php-sdk. It uses a service account to assume a role and access an S3 bucket. We also have an interface endpoint for STS. After I applied that policy, the container got stuck and could not assume the role.

strace:

 # strace -p 1
strace: Process 1 attached
restart_syscall(<... resuming interrupted read ...>) = 0
rt_sigaction(SIGPIPE, NULL, {sa_handler=SIG_IGN, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f30ea728c83}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f30ea728c83}, NULL, 8) = 0
poll([{fd=7, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 0 (Timeout)
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f30ea728c83}, NULL, 8) = 0
poll([{fd=7, events=POLLIN}], 1, 1000)  = 0 (Timeout)
rt_sigaction(SIGPIPE, NULL, {sa_handler=SIG_IGN, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f30ea728c83}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f30ea728c83}, NULL, 8) = 0
poll([{fd=7, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 0 (Timeout)
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f30ea728c83}, NULL, 8) = 0
poll([{fd=7, events=POLLIN}], 1, 1000^Cstrace: Process 1 detached
 <detached ...>

lsof:

/ # lsof -p 1
COMMAND PID   USER   FD   TYPE             DEVICE SIZE/OFF     NODE NAME
php       1 nobody  cwd    DIR              0,636     4096   393047 /usr/local/parser
php       1 nobody  rtd    DIR              0,636     4096  1087892 /
php       1 nobody  txt    REG              0,636 18892400   394323 /usr/local/bin/php
php       1 nobody  mem    REG             259,16            394323 /usr/local/bin/php (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            395012 /usr/lib/libzstd.so.1.5.2 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            395008 /usr/lib/libbz2.so.1.0.8 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            395010 /usr/lib/libzip.so.5.4 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            395025 /usr/local/lib/php/extensions/no-debug-non-zts-20210902/zip.so (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            394310 /usr/lib/libsodium.so.23.3.0 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            394858 /usr/local/lib/php/extensions/no-debug-non-zts-20210902/sodium.so (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            395024 /usr/local/lib/php/extensions/no-debug-non-zts-20210902/pdo_mysql.so (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            393910 /usr/lib/libbrotlicommon.so.1.0.9 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            393912 /usr/lib/libbrotlidec.so.1.0.9 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            393920 /usr/lib/libnghttp2.so.14.21.2 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            393918 /usr/lib/liblzma.so.5.2.5 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            394302 /usr/lib/libncursesw.so.6.3 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            394291 /usr/lib/libargon2.so.1 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            394304 /usr/lib/libonig.so.5.3.0 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            393916 /usr/lib/libcurl.so.4.8.0 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            393235 /lib/libz.so.1.2.12 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            394312 /usr/lib/libsqlite3.so.0.8.6 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            393232 /lib/libcrypto.so.1.1 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            393233 /lib/libssl.so.1.1 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            394314 /usr/lib/libxml2.so.2.9.14 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            394298 /usr/lib/libiconv.so.2.6.1 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            394308 /usr/lib/libreadline.so.8.1 (stat: Operation not permitted)
php       1 nobody  mem    REG             259,16            393229 /lib/ld-musl-x86_64.so.1 (stat: Operation not permitted)
php       1 nobody    0u   CHR                1,3      0t0        6 /dev/null
php       1 nobody    1w  FIFO               0,13      0t0 10593738 pipe
php       1 nobody    2w  FIFO               0,13      0t0 10593739 pipe
php       1 nobody    3r   REG              0,636     3299   395550 /usr/local/parser/sftp.php
php       1 nobody    4u  sock                0,8      0t0 10593987 protocol: TCP
php       1 nobody    5u  unix 0x0000000000000000      0t0 10594514 type=STREAM (CONNECTED)
php       1 nobody    6u  unix 0x0000000000000000      0t0 10594515 type=STREAM (CONNECTED)
php       1 nobody    7u  IPv4           10594520      0t0      TCP parser-cronjob-sftp-download-files-28256520-25p9z:59598->ip-10-1-12-220.ec2.internal:https (ESTABLISHED)

As you can see, the connection is in the ESTABLISHED state. Here are the logs from the instance:

[root@admin]# grep "10.1.201.3" network-policy-agent.log | grep "10.1.12.220"
{"level":"info","timestamp":"2023-09-22T13:30:04.299Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.201.3","Src Port":34454,"Dest IP":"10.1.12.220","Dest Port":443,"Proto":"TCP","Verdict":"ACCEPT"}
{"level":"info","timestamp":"2023-09-22T13:35:10.799Z","logger":"ebpf-client","msg":"Conntrack cleanup","Entry - ":"Expired/Delete Conntrack Key : Source IP - 10.1.201.3 Source port - 34454 Dest IP - 10.1.12.220 Dest port - 443 Protocol - 6"}
{"level":"info","timestamp":"2023-09-22T14:00:03.812Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:03.812Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:03.853Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:04.075Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:04.503Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:05.393Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:07.163Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:10.593Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:17.723Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:31.793Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:59.323Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:01:05.811Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}

But from inside the container, a new command that tests the connection to S3 works well:

{"level":"info","timestamp":"2023-09-23T12:00:44.469Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.201.3","Src Port":52301,"Dest IP":"172.20.0.10","Dest Port":53,"Proto":"UDP","Verdict":"ACCEPT"}
{"level":"info","timestamp":"2023-09-23T12:00:44.470Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.201.3","Src Port":53862,"Dest IP":"10.1.12.220","Dest Port":443,"Proto":"TCP","Verdict":"ACCEPT"}
{"level":"info","timestamp":"2023-09-23T12:00:44.696Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.201.3","Src Port":51649,"Dest IP":"172.20.0.10","Dest Port":53,"Proto":"UDP","Verdict":"ACCEPT"}
{"level":"info","timestamp":"2023-09-23T12:00:44.698Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.201.3","Src Port":52654,"Dest IP":"52.216.214.57","Dest Port":443,"Proto":"TCP","Verdict":"ACCEPT"}

Basically 4 requests: DNS -> STS -> DNS -> S3

The container IP is 10.1.201.3, and the STS interface endpoint IP is 10.1.12.220.
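For reference, the four requests can be reproduced from inside the pod with a plain TCP probe, which separates DNS failures from policy drops (a generic sketch; the IPs in the comments are from my environment, adjust for yours):

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Attempt a TCP handshake; True means SYN/SYN-ACK completed."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Examples from this environment:
# can_connect("172.20.0.10", 53)   # cluster DNS (TCP fallback)
# can_connect("10.1.12.220", 443)  # STS interface endpoint
```

With the deny-all policy in place, a fresh probe like this succeeds (egress is allowed), which is exactly why only the pre-existing connection hangs.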

If I remove that network policy, everything works again. Any ideas?

Environment:

Kubernetes version (use kubectl version): Server Version: v1.27.4-eks-2d98532
CNI Version: v1.15.0-eksbuild.2
OS (e.g. cat /etc/os-release): bottlerocket

$ cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"

Kernel (e.g. uname -a): Linux ip-10-1-193-17.ec2.internal 5.15.128 #1 SMP Thu Sep 14 21:42:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

wiseelf commented 11 months ago

php 1 nobody 7u IPv4 10594520 0t0 TCP parser-cronjob-sftp-download-files-28256520-25p9z:59598->ip-10-1-12-220.ec2.internal:https (ESTABLISHED)

The interesting part is that I do not see the initial (egress) connection being logged at all:

[root@admin]# grep "59598" network-policy-agent.log
{"level":"info","timestamp":"2023-09-22T14:00:03.812Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:03.812Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:03.853Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:04.075Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:04.503Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:05.393Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:07.163Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:10.593Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:17.723Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:31.793Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:59.323Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:01:05.811Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
[root@admin]#

So it looks like the network-policy-agent missed the initial connection, which was already established; from the firewall's perspective the return packets are new ingress traffic, which it successfully blocks.
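This can be checked mechanically: scan the agent log for DENY entries whose mirrored 4-tuple never got an egress ACCEPT, i.e. replies to connections the agent never saw being opened. A rough sketch against the JSON log format shown above (field names are taken from the log lines; the input is whatever log file you collected):

```python
import json

def find_unmatched_denies(lines):
    """Return DENY 4-tuples whose reverse (egress) flow was never ACCEPTed."""
    accepted = set()
    denies = []
    for line in lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue
        if rec.get("msg", "").strip() != "Flow Info:":
            continue
        key = (rec["Src IP"], rec["Src Port"], rec["Dest IP"], rec["Dest Port"])
        if rec["Verdict"] == "ACCEPT":
            accepted.add(key)
        elif rec["Verdict"] == "DENY":
            denies.append(key)
    # A DENY is suspicious when the mirrored tuple (the pod's own egress
    # connection) never appeared with an ACCEPT verdict.
    return [d for d in denies if (d[2], d[3], d[0], d[1]) not in accepted]

# e.g. find_unmatched_denies(open("network-policy-agent.log"))
```

On my logs this flags the 10.1.12.220:443 -> 10.1.201.3:59598 flow: it was denied repeatedly but port 59598 never shows up with an ACCEPT.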

The log file has records dating from Sep 19th:

[root@admin]# head -n 1 network-policy-agent.log
{"level":"info","timestamp":"2023-09-19T11:52:16.067Z","logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8162"}

and the pod has been in Running state for only 22 hours:

parser-cronjob-sftp-download-files-28256520-25p9z   1/1     Running   0          22h     10.1.201.3     ip-10-1-202-8.ec2.internal   <none>           <none>

wiseelf commented 11 months ago

ipamd.log

{"level":"debug","ts":"2023-09-22T14:00:00.726Z","caller":"rpc/rpc.pb.go:713","msg":"AddNetworkRequest: K8S_POD_NAME:\"parser-cronjob-sftp-download-files-28256520-25p9z\"  K8S_POD_NAMESPACE:\"parser\"  K8S_POD_INFRA_CONTAINER_ID:\"3a9720ae5faacff284f0f35e3ff7e9c96418d19e952ea4c31321d719c03e3f8b\"  ContainerID:\"3a9720ae5faacff284f0f35e3ff7e9c96418d19e952ea4c31321d719c03e3f8b\"  IfName:\"eth0\"  NetworkName:\"aws-cni\"  Netns:\"/var/run/netns/cni-ed3e971d-3e36-02c8-2112-3322c22753b8\""}
{"level":"info","ts":"2023-09-22T14:00:00.726Z","caller":"rpc/rpc.pb.go:713","msg":"Send AddNetworkReply: IPv4Addr 10.1.206.120, IPv6Addr: , DeviceNumber: 0, err: <nil>"}
{"level":"debug","ts":"2023-09-22T14:00:00.726Z","caller":"datastore/data_store.go:663","msg":"AssignIPv4Address: IP address pool stats: total 48, assigned 10"}
{"level":"debug","ts":"2023-09-22T14:00:00.726Z","caller":"datastore/data_store.go:740","msg":"Returning Free IP 10.1.201.3"}
{"level":"debug","ts":"2023-09-22T14:00:00.726Z","caller":"datastore/data_store.go:663","msg":"New IP from CIDR pool- 10.1.201.3"}
{"level":"info","ts":"2023-09-22T14:00:00.726Z","caller":"datastore/data_store.go:767","msg":"AssignPodIPv4Address: Assign IP 10.1.201.3 to sandbox aws-cni/3a9720ae5faacff284f0f35e3ff7e9c96418d19e952ea4c31321d719c03e3f8b/eth0"}
{"level":"debug","ts":"2023-09-22T14:00:00.727Z","caller":"rpc/rpc.pb.go:713","msg":"VPC CIDR 10.1.0.0/16"}
{"level":"info","ts":"2023-09-22T14:00:00.727Z","caller":"rpc/rpc.pb.go:713","msg":"Send AddNetworkReply: IPv4Addr 10.1.201.3, IPv6Addr: , DeviceNumber: 0, err: <nil>"}

plugin.log

{"level":"info","ts":"2023-09-22T14:00:00.725Z","caller":"routed-eni-cni-plugin/cni.go:126","msg":"Received CNI add request: ContainerID(3a9720ae5faacff284f0f35e3ff7e9c96418d19e952ea4c31321d719c03e3f8b) Netns(/var/run/netns/cni-ed3e971d-3e36-02c8-2112-3322c22753b8) IfName(eth0) Args(K8S_POD_INFRA_CONTAINER_ID=3a9720ae5faacff284f0f35e3ff7e9c96418d19e952ea4c31321d719c03e3f8b;K8S_POD_UID=32d25049-333b-4b5c-bc46-706b09cea20b;IgnoreUnknown=1;K8S_POD_NAMESPACE=parser;K8S_POD_NAME=parser-cronjob-sftp-download-files-28256520-25p9z) Path(/opt/cni/bin) argsStdinData({\"cniVersion\":\"0.4.0\",\"mtu\":\"9001\",\"name\":\"aws-cni\",\"pluginLogFile\":\"/var/log/aws-routed-eni/plugin.log\",\"pluginLogLevel\":\"DEBUG\",\"podSGEnforcingMode\":\"strict\",\"type\":\"aws-cni\",\"vethPrefix\":\"eni\"})"}
{"level":"info","ts":"2023-09-22T14:00:00.729Z","caller":"routed-eni-cni-plugin/cni.go:126","msg":"Received add network response from ipamd for container 3a9720ae5faacff284f0f35e3ff7e9c96418d19e952ea4c31321d719c03e3f8b interface eth0: Success:true IPv4Addr:\"10.1.201.3\" VPCv4CIDRs:\"10.1.0.0/16\""}
{"level":"debug","ts":"2023-09-22T14:00:00.729Z","caller":"routed-eni-cni-plugin/cni.go:227","msg":"SetupPodNetwork: hostVethName=enia8cf87efd45, contVethName=eth0, netnsPath=/var/run/netns/cni-ed3e971d-3e36-02c8-2112-3322c22753b8, v4Addr=10.1.201.3/32, v6Addr=<nil>, deviceNumber=0, mtu=9001"}
{"level":"debug","ts":"2023-09-22T14:00:00.960Z","caller":"driver/driver.go:253","msg":"Successfully setup container route, containerAddr=10.1.201.3/32, hostVeth=enia8cf87efd45, rtTable=main"}
{"level":"debug","ts":"2023-09-22T14:00:00.970Z","caller":"driver/driver.go:253","msg":"Successfully setup toContainer rule, containerAddr=10.1.201.3/32, rtTable=main"}

network-policy-agent.log:

{"level":"info","timestamp":"2023-09-22T14:00:03.497Z","logger":"controllers.policyEndpoints","msg":"Found a matching Pod: ","name: ":"parser-cronjob-sftp-download-files-28256520-25p9z","namespace: ":"parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.574Z","logger":"controllers.policyEndpoints","msg":"Processing Pod: ","name:":"parser-cronjob-sftp-download-files-28256520-25p9z","namespace:":"parser","podIdentifier: ":"parser-cronjob-transfer-queue-send-28256520-parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.574Z","logger":"controllers.policyEndpoints","msg":"Target Pod doesn't belong to the current pod Identifier: ","Name: ":"parser-cronjob-sftp-download-files-28256520-25p9z","Pod ID: ":"parser-cronjob-transfer-queue-send-28256520-parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.681Z","logger":"controllers.policyEndpoints","msg":"Processing Pod: ","name:":"parser-cronjob-sftp-download-files-28256520-25p9z","namespace:":"parser","podIdentifier: ":"parser-cronjob-sftp-run-handlers-28256520-parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.681Z","logger":"controllers.policyEndpoints","msg":"Target Pod doesn't belong to the current pod Identifier: ","Name: ":"parser-cronjob-sftp-download-files-28256520-25p9z","Pod ID: ":"parser-cronjob-sftp-run-handlers-28256520-parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.682Z","logger":"controllers.policyEndpoints","msg":"Processing Pod: ","name:":"parser-cronjob-sftp-download-files-28256520-25p9z","namespace:":"parser","podIdentifier: ":"parser-cronjob-sftp-download-files-28256520-parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.682Z","logger":"ebpf-client","msg":"AttacheBPFProbes for","pod":"parser-cronjob-sftp-download-files-28256520-25p9z"," in namespace":"parser"," with hostVethName":"enia8cf87efd45"}
{"level":"info","timestamp":"2023-09-22T14:00:03.714Z","logger":"ebpf-client","msg":"Successfully attached Ingress TC probe for","pod: ":"parser-cronjob-sftp-download-files-28256520-25p9z"," in namespace":"parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.744Z","logger":"ebpf-client","msg":"Successfully attached Egress TC probe for","pod: ":"parser-cronjob-sftp-download-files-28256520-25p9z"," in namespace":"parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.744Z","logger":"controllers.policyEndpoints","msg":"Successfully attached required eBPF probes for","pod:":"parser-cronjob-sftp-download-files-28256520-25p9z","in namespace":"parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.744Z","logger":"controllers.policyEndpoints","msg":"Processing Pod: ","name:":"parser-cronjob-sftp-download-files-28256520-25p9z","namespace:":"parser","podIdentifier: ":"parser-cronjob-sftp-download-files-28256460-parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.744Z","logger":"controllers.policyEndpoints","msg":"Target Pod doesn't belong to the current pod Identifier: ","Name: ":"parser-cronjob-sftp-download-files-28256520-25p9z","Pod ID: ":"parser-cronjob-sftp-download-files-28256460-parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.745Z","logger":"controllers.policyEndpoints","msg":"Processing Pod: ","name:":"parser-cronjob-sftp-download-files-28256520-25p9z","namespace:":"parser","podIdentifier: ":"parser-cronjob-sftp-download-files-28256470-parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.745Z","logger":"controllers.policyEndpoints","msg":"Target Pod doesn't belong to the current pod Identifier: ","Name: ":"parser-cronjob-sftp-download-files-28256520-25p9z","Pod ID: ":"parser-cronjob-sftp-download-files-28256470-parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.746Z","logger":"controllers.policyEndpoints","msg":"Processing Pod: ","name:":"parser-cronjob-sftp-download-files-28256520-25p9z","namespace:":"parser","podIdentifier: ":"parser-cronjob-sftp-download-files-28256510-parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.746Z","logger":"controllers.policyEndpoints","msg":"Target Pod doesn't belong to the current pod Identifier: ","Name: ":"parser-cronjob-sftp-download-files-28256520-25p9z","Pod ID: ":"parser-cronjob-sftp-download-files-28256510-parser"}
{"level":"info","timestamp":"2023-09-22T14:00:03.812Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","timestamp":"2023-09-22T14:00:03.812Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.201.3","Dest Port":59598,"Proto":"TCP","Verdict":"DENY"}

Pod info:

Containers:
  parser-cronjob:
    Container ID:  containerd://f446de04c67167b84b0fb9e44bea0c221ee0d0d2c1442c6660f5e0d635e9dab4
    Image:         xxx.dkr.ecr.us-east-1.amazonaws.com/yyy/parser:dev-d91a282b7d8c9edabdb5acdd6a5d424da77f9589
    Image ID:      xxx.dkr.ecr.us-east-1.amazonaws.com/yyy/parser@sha256:abe3d830df85eaf621c11713aeb614f286f1bc75a4fd727777ec3193f255cbce
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
    Args:
      -c
      php /usr/local/parser/sftp.php download-files
    State:          Running
      Started:      Fri, 22 Sep 2023 14:00:02 +0000

So the pod was created at 14:00:00, went into Running state at 14:00:02, and the network policy was attached at 14:00:03. It looks like any connection established between 14:00:02 and 14:00:03 is automatically blocked once the network policy is applied.
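My reading of the behavior, sketched as a toy model (this is an assumption about the agent's internals for illustration, not its actual code): egress traffic sent before the TC probes attach never populates the agent's flow state, so the replies look like unsolicited ingress.

```python
class ToyAgent:
    """Toy model: ingress is only accepted for flows the agent saw leave."""

    def __init__(self):
        self.flow_map = set()   # forward flows observed after attach
        self.attached = False

    def egress(self, src, sport, dst, dport):
        if self.attached:       # traffic before attach is invisible to the agent
            self.flow_map.add((src, sport, dst, dport))

    def ingress_verdict(self, src, sport, dst, dport):
        # A reply is allowed only if its forward flow was recorded.
        fwd = (dst, dport, src, sport)
        return "ACCEPT" if fwd in self.flow_map else "DENY"

agent = ToyAgent()
agent.egress("10.1.201.3", 59598, "10.1.12.220", 443)  # 14:00:02, before attach
agent.attached = True                                  # 14:00:03, probes attach
print(agent.ingress_verdict("10.1.12.220", 443, "10.1.201.3", 59598))  # DENY
```

This matches the log: connections opened after 14:00:03 are fine, while the one opened in the 14:00:02-14:00:03 window is denied forever.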

I've added a sleep 5 command before the main script; so far no issues. But I don't want to add sleep to all containers.
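If the delay has to stay for now, it can at least be centralized in a small entrypoint shim instead of editing every container command (a hypothetical helper, not an agent feature; STARTUP_DELAY_SECONDS is a name I made up, and this assumes Python exists in the image, otherwise the same thing is a one-line sh wrapper):

```python
"""Delay startup briefly so the policy agent can attach its eBPF probes
before the application opens long-lived connections."""
import os
import sys
import time

def run(argv, delay=None):
    """Sleep, then replace this process with the real command."""
    if delay is None:
        delay = float(os.environ.get("STARTUP_DELAY_SECONDS", "5"))
    time.sleep(delay)           # give the agent time to attach probes
    if argv:
        os.execvp(argv[0], argv)  # never returns on success

# Invoked from the pod spec, e.g.:
#   command: ["python3", "/shim.py", "php", "/usr/local/parser/sftp.php", "download-files"]
```

The delay becomes one env var per workload instead of a sleep baked into every command line.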

wiseelf commented 11 months ago

Any ideas how to fix this without adding a sleep command?

wiseelf commented 10 months ago

Same issue with the latest v1.0.5.

wiseelf commented 9 months ago

v1.0.6 still has the same issue. All established connections are denied after the network policy is applied:

{"level":"info","ts":"2023-11-21T08:21:04.484Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"52.217.65.212","Src Port":443,"Dest IP":"10.1.202.70","Dest Port":40114,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-11-21T08:21:05.936Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.202.70","Dest Port":39644,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-11-21T08:21:09.376Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.202.70","Dest Port":39644,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-11-21T08:21:16.336Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.202.70","Dest Port":39644,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-11-21T08:21:30.406Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.202.70","Dest Port":39644,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-11-21T08:21:51.186Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.1.32","Src Port":22,"Dest IP":"10.1.202.70","Dest Port":52082,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-11-21T08:21:57.926Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.12.220","Src Port":443,"Dest IP":"10.1.202.70","Dest Port":39644,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-11-21T08:23:37.682Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.1.32","Src Port":22,"Dest IP":"10.1.202.70","Dest Port":52082,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-11-21T08:25:02.785Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.8.149","Src Port":3306,"Dest IP":"10.1.202.70","Dest Port":60662,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-11-21T08:25:33.105Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.8.149","Src Port":3306,"Dest IP":"10.1.202.70","Dest Port":60662,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-11-21T08:25:38.514Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.1.32","Src Port":22,"Dest IP":"10.1.202.70","Dest Port":52082,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-11-21T08:26:03.185Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.8.149","Src Port":3306,"Dest IP":"10.1.202.70","Dest Port":60662,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-11-21T08:26:03.784Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.1.32","Src Port":22,"Dest IP":"10.1.202.70","Dest Port":52082,"Proto":"TCP","Verdict":"DENY"}
jayanthvn commented 9 months ago

@wiseelf -

Will you be able to try this image -

<account-number>.dkr.ecr.<region>.amazonaws.com/amazon/aws-network-policy-agent:v1.0.7-rc3

Please make sure you replace the account number and region.

wiseelf commented 8 months ago

@jayanthvn I have the same issue:

 % k -n parser get pods -o wide
NAME                                                READY   STATUS    RESTARTS   AGE     IP             NODE                          NOMINATED NODE   READINESS GATES
parser-cronjob-transfer-queue-send-28372750-x99tz   1/1     Running   0          5m29s   10.1.206.228   ip-10-1-194-66.ec2.internal   <none>           <none>
{"level":"info","ts":"2023-12-12T07:00:02.818Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.206.228","Src Port":49880,"Dest IP":"10.1.6.79","Dest Port":6379,"Proto":"TCP","Verdict":"ACCEPT"}
{"level":"info","ts":"2023-12-12T07:00:03.731Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.206.228","Src Port":56468,"Dest IP":"172.20.0.10","Dest Port":53,"Proto":"UDP","Verdict":"ACCEPT"}
{"level":"info","ts":"2023-12-12T07:00:03.735Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.206.228","Src Port":59708,"Dest IP":"10.1.8.149","Dest Port":3306,"Proto":"TCP","Verdict":"ACCEPT"}
{"level":"info","ts":"2023-12-12T07:00:04.401Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.206.228","Src Port":35421,"Dest IP":"172.20.0.10","Dest Port":53,"Proto":"UDP","Verdict":"ACCEPT"}
{"level":"info","ts":"2023-12-12T07:00:04.439Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.206.228","Src Port":49882,"Dest IP":"10.1.6.79","Dest Port":6379,"Proto":"TCP","Verdict":"ACCEPT"}
{"level":"info","ts":"2023-12-12T07:00:04.711Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.206.228","Src Port":59802,"Dest IP":"172.20.0.10","Dest Port":53,"Proto":"UDP","Verdict":"ACCEPT"}
{"level":"info","ts":"2023-12-12T07:00:04.719Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.206.228","Src Port":49892,"Dest IP":"10.1.6.79","Dest Port":6379,"Proto":"TCP","Verdict":"ACCEPT"}
{"level":"info","ts":"2023-12-12T07:02:42.672Z","logger":"ebpf-client","caller":"wait/backoff.go:227","msg":"Conntrack cleanup","Entry - ":"Expired/Delete Conntrack Key : Source IP - 10.1.206.228 Source port - 49880 Dest IP - 10.1.6.79 Destport - 6379 Protocol - 6 Owner IP - 10.1.206.228"}
{"level":"info","ts":"2023-12-12T07:02:42.672Z","logger":"ebpf-client","caller":"wait/backoff.go:227","msg":"Conntrack cleanup","Entry - ":"Expired/Delete Conntrack Key : Source IP - 10.1.206.228 Source port - 56468 Dest IP - 172.20.0.10 Dest port - 53 Protocol - 17 Owner IP - 10.1.206.228"}
{"level":"info","ts":"2023-12-12T07:02:42.672Z","logger":"ebpf-client","caller":"wait/backoff.go:227","msg":"Conntrack cleanup","Entry - ":"Expired/Delete Conntrack Key : Source IP - 10.1.206.228 Source port - 49892 Dest IP - 10.1.6.79 Destport - 6379 Protocol - 6 Owner IP - 10.1.206.228"}
{"level":"info","ts":"2023-12-12T07:02:42.672Z","logger":"ebpf-client","caller":"wait/backoff.go:227","msg":"Conntrack cleanup","Entry - ":"Expired/Delete Conntrack Key : Source IP - 10.1.206.228 Source port - 49882 Dest IP - 10.1.6.79 Destport - 6379 Protocol - 6 Owner IP - 10.1.206.228"}
{"level":"info","ts":"2023-12-12T07:02:42.673Z","logger":"ebpf-client","caller":"wait/backoff.go:227","msg":"Conntrack cleanup","Entry - ":"Expired/Delete Conntrack Key : Source IP - 10.1.206.228 Source port - 35421 Dest IP - 172.20.0.10 Dest port - 53 Protocol - 17 Owner IP - 10.1.206.228"}
{"level":"info","ts":"2023-12-12T07:02:42.673Z","logger":"ebpf-client","caller":"wait/backoff.go:227","msg":"Conntrack cleanup","Entry - ":"Expired/Delete Conntrack Key : Source IP - 10.1.206.228 Source port - 59708 Dest IP - 10.1.8.149 Dest port - 3306 Protocol - 6 Owner IP - 10.1.206.228"}
{"level":"info","ts":"2023-12-12T07:02:42.673Z","logger":"ebpf-client","caller":"wait/backoff.go:227","msg":"Conntrack cleanup","Entry - ":"Expired/Delete Conntrack Key : Source IP - 10.1.206.228 Source port - 59802 Dest IP - 172.20.0.10 Dest port - 53 Protocol - 17 Owner IP - 10.1.206.228"}
{"level":"info","ts":"2023-12-12T07:10:03.071Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"52.217.10.156","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":60406,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:10:03.279Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"52.217.10.156","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":60406,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:10:03.487Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"52.217.10.156","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":60406,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:10:03.919Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"52.217.10.156","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":60406,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:10:04.751Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"52.217.10.156","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":60406,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:10:06.415Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"52.217.10.156","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":60406,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:10:09.779Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"52.217.10.156","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":60406,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:10:11.248Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"52.217.10.156","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":60406,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:10:16.431Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"52.217.10.156","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":60406,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:10:29.743Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"52.217.10.156","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":60406,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:10:56.627Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"52.217.10.156","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":60406,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:11:02.668Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.13.226","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":56068,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:11:02.881Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.13.226","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":56068,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:11:03.111Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.13.226","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":56068,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:11:03.551Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.13.226","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":56068,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:11:04.451Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.13.226","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":56068,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:11:04.678Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.13.226","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":56068,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:11:06.211Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.13.226","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":56068,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:11:09.661Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.13.226","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":56068,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:11:17.021Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.13.226","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":56068,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:11:31.091Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.13.226","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":56068,"Proto":"TCP","Verdict":"DENY"}
{"level":"info","ts":"2023-12-12T07:11:58.611Z","logger":"ebpf-client","msg":"Flow Info:  ","Src IP":"10.1.13.226","Src Port":443,"Dest IP":"10.1.206.228","Dest Port":56068,"Proto":"TCP","Verdict":"DENY"}
jayanthvn commented 8 months ago

Actually I missed your comment -

So the pod was created at 14:00:00, went into the running state at 14:00:02, and the network policy was attached at 14:00:03. It looks like all connections that were established between 14:00:02 and 14:00:03 are automatically blocked after the network policy is applied.

v1.0.7 doesn't fix it, since I assumed it was a long-standing connection. The agent's local conntrack cache is only populated if the network policy allows the connection, i.e., for the first packet; subsequent packets then go through the local conntrack. Here it seems the STS connection was established between 14:00:02 and 14:00:03, and once the deny-all policy is applied everything gets blocked, because the connection must be explicitly allowed. So the 5s sleep is for NP to enforce?
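The 5s sleep workaround discussed in this thread can be sketched as a startup delay in the pod spec, giving the agent time to attach its probes before the first egress connection. This is a hypothetical illustration, not an official fix; the pod name, image, and command are placeholders:

```yaml
# Hypothetical sketch: delay the application entrypoint so network policies
# can be enforced before the first outbound (e.g. STS) connection is made.
# Pod name, image, and script path are assumptions for illustration only.
apiVersion: v1
kind: Pod
metadata:
  name: parser
spec:
  containers:
    - name: parser
      image: example.com/parser:latest   # placeholder image
      command: ["sh", "-c", "sleep 5 && exec php /usr/local/parser/run.php"]
```

As noted below, this mitigates the race but is not a real solution, since the required delay is not guaranteed to be sufficient.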

wiseelf commented 8 months ago

So the 5s sleep is for NP to enforce?

Well, adding the sleep is not a problem, but it is not a solution at all. I believe Cilium and Calico do not have this issue; I'll definitely try them when I have time.

wiseelf commented 8 months ago

Tried Cilium and Calico; neither of them has this issue.

jdn5126 commented 8 months ago

Tracking this as another issue that should be addressed by the strict mode implementation.

ad-zsolt-imre commented 5 months ago

I also see `Target Pod doesn't belong to the current pod Identifier` messages in the logs, and connections are being dropped unexpectedly with aws-network-policy-agent:v1.0.8-eksbuild.1.

For example:

..."caller":"controllers/policyendpoints_controller.go:146","msg":"Processing Pod: ","name:":"vault-0","namespace:":"dev","podIdentifier: ":"something-c4f9b87c4-dev"}
..."caller":"controllers/policyendpoints_controller.go:146","msg":"Target Pod doesn't belong to the current pod Identifier: ","Name: ":"vault-0","Pod ID: ":"something-c4f9b87c4-dev"}
jayanthvn commented 4 months ago

Here the pod attempted to start a connection before NP enforcement, and hence the response packet is dropped. Please refer to https://github.com/aws/aws-network-policy-agent/issues/189#issuecomment-1907586763 for a detailed explanation.

Our recommended solution for this is strict mode, which gates pod launch until policies are configured for the newly launched pod - https://github.com/aws/amazon-vpc-cni-k8s?tab=readme-ov-file#network_policy_enforcing_mode-v1171

sknmi commented 2 months ago

@jayanthvn how exactly will that help? With the NETWORK_POLICY_ENFORCING_MODE variable set to strict, pods that use the VPC CNI start with a default deny policy, then policies are configured. This is called strict mode. In strict mode, you must have a network policy for every endpoint that your pods need to access in your cluster. Note that this requirement applies to the CoreDNS pods. The default deny policy isn't configured for pods with host networking.

Which means the initial connection to STS will be denied.
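Under strict mode (or any default-deny setup), the first STS connection would need an explicit egress allow rule. A hedged sketch of such a policy follows; the policy name, port choices, and the kube-dns label are assumptions that would need to be adapted to the cluster (e.g. restricting 443 to the STS interface endpoint's CIDR rather than all destinations):

```yaml
# Hypothetical sketch: allow DNS resolution plus outbound HTTPS (e.g. to an
# STS interface endpoint) for all pods in the namespace. Selectors, labels,
# and the unrestricted 443 rule are illustrative assumptions only.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-dns-and-https-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # DNS to CoreDNS/kube-dns in any namespace
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # HTTPS to any destination; narrow with ipBlock in a real cluster
    - ports:
        - protocol: TCP
          port: 443
```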

jayanthvn commented 2 months ago

@sknmi - In the original issue, adding a 5s delay mitigated the problem. As explained in https://github.com/aws/aws-network-policy-agent/issues/189#issuecomment-1907586763, when the initial connection was made, the network policy was not yet enforced: the egress connection happens right after pod startup and before the policies are enforced, so there is no conntrack entry since no probes are attached yet. By the time the return traffic arrives, the network policy has been enforced, there is still no conntrack entry, and the ingress rules in the configured policy do not allow the traffic, resulting in a drop. Hence adding a few seconds of delay helped. With strict mode, pod launch will be blocked until policies are reconciled.

sknmi commented 2 months ago

@jayanthvn strict mode requires network policies for the kube-system namespace, for example for CoreDNS and others. Do you have any template for that case, or maybe some best practices? What will happen if we have CoreDNS on Fargate nodes?
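For reference, a minimal sketch of the kind of kube-system policy being asked about here: an ingress rule letting every namespace reach CoreDNS. This is a hypothetical example, not an official template; the `k8s-app: kube-dns` label matches the default CoreDNS deployment but should be verified per cluster:

```yaml
# Hypothetical sketch: permit DNS queries from all namespaces to CoreDNS
# in kube-system under strict mode. The pod label is an assumption based
# on the default CoreDNS deployment; verify it in your cluster.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-dns-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      k8s-app: kube-dns
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```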

creinheimer commented 2 months ago

Hi. After days of investigation and wasted time, we found this thread.

We are facing the same issue with standard mode. Things were just randomly failing.

Switching to strict mode is not an option due to the associated risks and the significant amount of work required.

@jayanthvn, are there any concrete plans to address this issue? The team is currently losing trust in the AWS solution for network policies.

jayanthvn commented 1 month ago

@creinheimer - Sorry to hear that and we would be happy to help debug the issue.

Just to clarify, when you mention random failures: the initial connection is allowed before network policy enforcement and the return packet is dropped, correct? (https://github.com/aws/aws-network-policy-agent/issues/189#issuecomment-1907586763) Regarding this issue, we are thinking about a few alternatives.

Are you on the latest version (v1.1.2) of the NP agent? It won't fix this issue, but we have fixed a few other timeout-related issues.