Open · andy108369 opened 1 year ago
Hairpinning might be a solution here as @chainzero mentioned.
This network policy is likely what breaks the communication:
https://github.com/akash-network/provider/blob/v0.2.1/cluster/kube/builder/netpol.go#L123-L138
```yaml
$ kubectl -n $ns get netpol -o yaml
...
...
    - to:
      - ipBlock:
          cidr: 0.0.0.0/0
          except:
            - 10.0.0.0/8
            - 192.168.0.0/16
            - 172.16.0.0/12
```
The `10.0.0.0/8` restriction overlaps `kube_service_addresses: 10.233.0.0/18`, `kube_pods_subnet: 10.233.64.0/18`, and `calico_pool_cidr: 10.233.64.0/20`, restricting the communication between different deployments.
```
$ ipcalc 10.0.0.0/8
Address:   10.0.0.0             00001010. 00000000.00000000.00000000
Netmask:   255.0.0.0 = 8        11111111. 00000000.00000000.00000000
Wildcard:  0.255.255.255        00000000. 11111111.11111111.11111111
=>
Network:   10.0.0.0/8           00001010. 00000000.00000000.00000000
HostMin:   10.0.0.1             00001010. 00000000.00000000.00000001
HostMax:   10.255.255.254       00001010. 11111111.11111111.11111110
Broadcast: 10.255.255.255       00001010. 11111111.11111111.11111111
Hosts/Net: 16777214             Class A, Private Internet
```
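The overlap can also be checked programmatically; a quick sketch using python3's `ipaddress` module (assuming `python3` is available on the host):

```shell
# Check whether each cluster subnet falls inside the netpol's 10.0.0.0/8 "except" block;
# each check prints True, i.e. all cluster traffic is caught by the egress exception.
for net in 10.233.0.0/18 10.233.64.0/18 10.233.64.0/20; do
  python3 -c "import ipaddress; n='$net'; print(n, 'inside 10.0.0.0/8:', ipaddress.ip_network(n).subnet_of(ipaddress.ip_network('10.0.0.0/8')))"
done
```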
```
kubespray$ git grep -E '^kube_pods_subnet:|^kube_service_addresses:|^calico_pool_cidr:' | column -t | sort -d -k2,2
roles/kubespray-defaults/defaults/main.yaml:kube_service_addresses:         10.233.0.0/18
inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml:kube_service_addresses:  10.233.0.0/18
roles/kubespray-defaults/defaults/main.yaml:kube_pods_subnet:               10.233.64.0/18
inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml:kube_pods_subnet:   10.233.64.0/18
docs/calico.md:calico_pool_cidr:                                            10.233.64.0/20
```
While this netpol does keep pods deployed in different namespaces secure from each other (you don't want someone poking your internal app's ports from their deployment), the only feasible solution I see would be to allow the ingress communication by permitting egress to ingress-nginx:
```yaml
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: ingress-nginx
    podSelector: {}
```
Quick patch (`$ns` is the namespace of the deployment you wish to enable access from to the ingress resources behind the ingress-nginx controller):

```shell
kubectl -n $ns patch netpol akash-deployment-restrictions --type=json -p='[{"op": "add", "path": "/spec/egress/-", "value":{"to":[{"namespaceSelector":{"matchLabels":{"kubernetes.io/metadata.name":"ingress-nginx"}},"podSelector":{}}]}}]'
```
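If in doubt, the JSON patch payload can be sanity-checked locally before applying it; a small sketch using python3's `json.tool` (nothing Akash-specific here, it only confirms the payload is valid JSON):

```shell
# Pretty-print the patch; a non-zero exit status means the JSON is malformed.
patch='[{"op": "add", "path": "/spec/egress/-", "value":{"to":[{"namespaceSelector":{"matchLabels":{"kubernetes.io/metadata.name":"ingress-nginx"}},"podSelector":{}}]}}]'
echo "$patch" | python3 -m json.tool
```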
It appears that the Akash deployment network policy blocks the ingress & NodePort access between two deployments running on the same provider. They are accessible when the deployments are running on different providers. Ingress & NodePorts (`global: true`) are expected to be open even when the deployments are running on the same provider.