openshift / openshift-sdn

Apache License 2.0

Service endpoint isolation #178

Closed danwinship closed 9 years ago

danwinship commented 9 years ago

Problem: because we only handle service isolation on the source end, you can work around isolation (of both pods and services) by creating a headless service and then manually assigning it endpoints in another project.

Solution 1: Use pure-iptables-based service proxying, and do isolation at the destination rather than the source. But this turns out not to work: we still have to masquerade the packets (otherwise the service pod would try to send its response directly back to the client, which would reject it, since the client has no open connection to the service pod's IP), so the destination doesn't know the original source IP.

Solution 2: Use OVS-based service proxying instead of iptables, so we can carry the source VNID through the whole process. But we still need to masquerade, for the same reason as with iptables, so this can't be done without OVS conntrack support. (Though I think this is eventually the best solution.)

Solution 3: Find some other way to pass the VNID through kube-proxy... e.g., kube-proxy could bind its outgoing connection to a source port that has some mathematical relation to the VNID, and the destination would check that. Or not.

Solution 3a: Use IPv6 addresses and encode the information into some of the excess bits. Except we can't use IPv6.

Solution 4: Filter the illegal endpoints out of the list passed to the Proxier, so we can guarantee that source-side filtering is sufficient.

This branch implements solution 4. The origin side is at https://github.com/danwinship/origin/commits/endpoint-isolation.
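Conceptually, the source-side filtering of solution 4 boils down to dropping any endpoint whose backing pod lives in a different project than the service. Here is a minimal Go sketch of that idea; the `Endpoint` type, the `podVNIDs` map, and `filterEndpoints` are hypothetical stand-ins for illustration, not the actual osdnapi/Proxier interfaces:

```go
package main

import "fmt"

// Endpoint is a stand-in for a service endpoint (IP:port of a pod).
type Endpoint struct {
	IP   string
	Port int
}

// podVNIDs maps a pod IP to the VNID of the project it belongs to.
// In the real plugin this information would come from the SDN's own
// pod-to-project tracking; here it is just hardcoded example data.
var podVNIDs = map[string]uint32{
	"10.1.0.5": 7,  // pod in the service's own project (VNID 7)
	"10.1.2.9": 12, // pod manually added from another project
	"10.1.3.4": 7,
}

// filterEndpoints keeps only endpoints whose backing pod shares the
// service's VNID. VNID 0 (the "global" namespace) is visible to all.
// Endpoints that don't resolve to a known pod IP are dropped, since
// we cannot vouch for their project membership.
func filterEndpoints(eps []Endpoint, serviceVNID uint32) []Endpoint {
	var out []Endpoint
	for _, ep := range eps {
		vnid, known := podVNIDs[ep.IP]
		if !known {
			continue
		}
		if vnid == serviceVNID || vnid == 0 {
			out = append(out, ep)
		}
	}
	return out
}

func main() {
	eps := []Endpoint{
		{"10.1.0.5", 8080},
		{"10.1.2.9", 8080},
		{"10.1.3.4", 8080},
	}
	// Only the two VNID-7 endpoints survive; the cross-project one is dropped.
	fmt.Println(filterEndpoints(eps, 7))
}
```

Because the Proxier never sees the cross-project endpoint, no destination-side check (and hence no way to recover the pre-masquerade source) is needed.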

The osdnapi.Pod stuff is going to conflict with Ravi's branch...

@pravisankar @dcbw

danwinship commented 9 years ago

> The osdnapi.Pod stuff is going to conflict with Ravi's branch...

I've just repushed this, rebased on top of that.

pravisankar commented 9 years ago

Rest of the changes LGTM