Closed — drpaneas closed this issue 6 years ago
I've also tried the instructions found in https://github.com/cilium/star-wars-demo/blob/master/README.md. Still the same result:
admin:~/star-wars-demo # kubectl create -f 01-deathstar.yaml -f 02-xwing.yaml
service "deathstar" created
deployment.extensions "deathstar" created
deployment.extensions "spaceship" created
deployment.extensions "xwing" created
admin:~/star-wars-demo # kubectl get pods
NAME READY STATUS RESTARTS AGE
deathstar-99f54944f-5zbrj 1/1 Running 0 35s
deathstar-99f54944f-f5pgd 1/1 Running 0 35s
deathstar-99f54944f-lphk9 1/1 Running 0 35s
spaceship-d9f5db749-bt647 1/1 Running 0 35s
spaceship-d9f5db749-hs597 1/1 Running 0 35s
spaceship-d9f5db749-qmsqk 1/1 Running 0 35s
spaceship-d9f5db749-zdxbj 1/1 Running 0 35s
xwing-585b668b8d-nmblb 1/1 Running 0 35s
xwing-585b668b8d-sj8d9 1/1 Running 0 35s
xwing-585b668b8d-xkk7t 1/1 Running 0 35s
admin:~/star-wars-demo # kubectl exec -ti xwing-585b668b8d-nmblb -- curl -XGET deathstar.default.svc.cluster.local/v1/
{
"name": "Death Star",
"model": "DS-1 Orbital Battle Station",
"manufacturer": "Imperial Department of Military Research, Sienar Fleet Systems",
"cost_in_credits": "1000000000000",
"length": "120000",
"crew": "342953",
"passengers": "843342",
"cargo_capacity": "1000000000000",
"hyperdrive_rating": "4.0",
"starship_class": "Deep Space Mobile Battlestation",
"api": [
"GET /v1",
"GET /v1/healthz",
"POST /v1/request-landing",
"PUT /v1/cargobay",
"GET /v1/hyper-matter-reactor/status",
"PUT /v1/exhaust-port"
]
}
admin:~/star-wars-demo # kubectl create -f policy/l7_policy.yaml
ciliumnetworkpolicy.cilium.io "deathstar-api-protection" created
admin:~/star-wars-demo # kubectl exec -ti xwing-585b668b8d-nmblb -- curl -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Panic: deathstar exploded
goroutine 1 [running]:
main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
        /code/src/github.com/empire/deathstar/temp/main.go:9 +0x64
main.main()
        /code/src/github.com/empire/deathstar/temp/main.go:5 +0x85
It turned out to be a problem with the registry.opensuse.org/devel/caasp/kubic-container/container/kubic/cilium:1.2.1 container image: the cilium-envoy binary is missing from the container.
It seems that this L7 policy is not working because the openSUSE image doesn't contain the cilium-envoy binary. So, basically, we need to enable Envoy support.
@drpaneas If you are curious, we are still struggling with packaging Envoy; here is the discussion with the upstream devs, which may help: https://github.com/envoyproxy/envoy/pull/4585
Let's close this issue and sort out the packaging problem internally. Sorry for the noise!
@tgraf please close
I was following the very nice and interesting read (https://cilium.readthedocs.io/en/v1.2/gettingstarted/gsg_starwars/), but instead of minikube I am trying to test Cilium on SUSE CaaSP. Everything seems to be fine so far, apart from the last part, which is related to L7.
Expected Behavior:
Actual Behavior
Debugging information below:
My cluster
In your example, you are using minikube, which is a single-node cluster. I am using a multi-node cluster: 1 master and 2 workers.
Deploy the Demo Application
Each pod will go through several states until it reaches Running, at which point the pod is ready. Each pod will be represented in Cilium as an Endpoint.
Since I have 3 nodes (1 master, 2 workers), I guess it's only normal that there are 3 cilium pods. Please let me know if this setup is not expected. We can invoke the cilium tool inside the Cilium pods to list the endpoints, but each agent only reports the endpoints on its own node, so if I query just one pod I get different results every time (sometimes deathstar and xwing, sometimes only deathstar, sometimes nothing, etc.). I therefore had to build a bash array containing all of my cilium pods and query each of them.
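The loop over all agent pods can be sketched as a small shell helper (a minimal sketch: the kube-system namespace and the k8s-app=cilium label are assumptions based on the standard Cilium DaemonSet, not copied from my cluster):

```shell
# Hypothetical helper: list Cilium endpoints from every agent pod,
# since each agent only knows the endpoints hosted on its own node.
cilium_endpoints_all() {
  # Collect the names of all cilium agent pods (assumed label: k8s-app=cilium).
  pods=$(kubectl -n kube-system get pods -l k8s-app=cilium \
         -o jsonpath='{.items[*].metadata.name}')
  for pod in $pods; do
    echo "=== $pod ==="
    kubectl -n kube-system exec "$pod" -- cilium endpoint list
  done
}
```

Calling `cilium_endpoints_all` then prints the endpoint table of every node, so deathstar and xwing endpoints show up regardless of which node they were scheduled on.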
Apply an L3/L4 Policy
Check Current Access
If I check the policy again, the ingress policy enforcement is now enabled for deathstar:
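For reference, the L3/L4 rule the guide applies can be sketched roughly as follows (the policy name and label selectors are assumptions based on the upstream getting-started guide, not copied from my cluster):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: rule1   # hypothetical name
spec:
  description: "Only empire ships may talk to the deathstar on port 80"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
```

This enforces only at L3/L4: any empire-labeled pod may open TCP/80 to deathstar, with no restriction yet on which HTTP calls it makes.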
Apply and Test HTTP-aware L7 Policy
The problem with that is that if you have 2 replicas and you 'explode' them twice, then your containers are down:
The fix would be to limit tiefighter to making only a POST /v1/request-landing API call, while disallowing all other calls (including PUT /v1/exhaust-port).
Do you know what is going wrong, and why the L7 rule still lets the deathstar explode?
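A hedged sketch of what such an L7 rule could look like (the label selectors are assumptions; the actual policy/l7_policy.yaml in the demo repo may differ, though the policy name matches the kubectl output above):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: deathstar-api-protection
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        # Only this call is allowed; PUT /v1/exhaust-port should get HTTP 403.
        - method: "POST"
          path: "/v1/request-landing"
```

An `http` rules section like this is exactly what requires the Envoy proxy in the agent image, which is why the rule silently has no effect when cilium-envoy is missing.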