emissary-ingress / emissary

open source Kubernetes-native API gateway for microservices built on the Envoy Proxy
https://www.getambassador.io
Apache License 2.0

Ambassador 0.70.0 is not able to correctly separate a resource in the `getambassador.io` `apiGroup` from a resource of the same name in a different `apiGroup` #1557

Closed: iNoahNothing closed this issue 5 years ago

iNoahNothing commented 5 years ago

I just ran a test: I created a bogus CRD named `Mapping` in the `example.com/v1` apiGroup to see whether Ambassador could handle multiple resources of the same name correctly. Here are my findings:

  1. First, I installed Ambassador from https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml and created a Mapping resource to expose the tour ui app.
  2. Then, I created the following CRD and `Mapping` resource in the `example.com` apiGroup, and Ambassador ignored it:
    
```yaml
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: mappings.example.com
spec:
  group: example.com
  version: v1
  versions:
    - name: v1
      served: true
      storage: true
  scope: Namespaced
  names:
    plural: mappings
    singular: mapping
    kind: Mapping
```

```yaml
---
apiVersion: example.com/v1
kind: Mapping
metadata:
  name: ui-mapping
spec:
  prefix: /ui/
  service: tour-ui
```

3. Then, I left the above CRD and `Mapping` resource in the cluster and deleted and reapplied Ambassador. Initially, Ambassador panicked with an RBAC error:

```
2019/05/20 16:30:20 kubebootstrap: WORKER PANICKED: mappings.example.com is forbidden: User "system:serviceaccount:default:ambassador" cannot list mappings.example.com at the cluster scope
goroutine 134 [running]:
runtime/debug.Stack(0xc00012b980, 0x1216fc0, 0xc000202240)
	/usr/local/go/src/runtime/debug/stack.go:24 +0xa7
github.com/datawire/teleproxy/pkg/supervisor.(*Supervisor).launch.func1.1.1(0xc00012bf88)
	/home/circleci/repo/pkg/supervisor/supervisor.go:305 +0x60
panic(0x1216fc0, 0xc000202240)
	/usr/local/go/src/runtime/panic.go:513 +0x1b9
github.com/datawire/teleproxy/pkg/k8s.(*Watcher).sync(0xc0004be400, 0xc0003ed3b8, 0x8)
	/home/circleci/repo/pkg/k8s/watcher.go:242 +0x218
github.com/datawire/teleproxy/pkg/k8s.(*Watcher).Start(0xc0004be400)
	/home/circleci/repo/pkg/k8s/watcher.go:225 +0xe9
main.(*kubebootstrap).Work(0xc000136c40, 0xc000560400, 0xc00053e120, 0xc0003f9f28)
	/home/circleci/repo/cmd/watt/kubewatchman.go:145 +0x55
main.(*kubebootstrap).Work-fm(0xc000560400, 0x13f5630, 0xc0003f9f88)
	/home/circleci/repo/cmd/watt/main.go:107 +0x34
github.com/datawire/teleproxy/pkg/supervisor.(*Supervisor).launch.func1.1(0xc00012bf88, 0xc000136d90, 0xc000560400)
	/home/circleci/repo/pkg/supervisor/supervisor.go:310 +0x70
github.com/datawire/teleproxy/pkg/supervisor.(*Supervisor).launch.func1(0xc000136d90, 0xc000560400, 0xc000136cb0)
	/home/circleci/repo/pkg/supervisor/supervisor.go:311 +0x51
created by github.com/datawire/teleproxy/pkg/supervisor.(*Supervisor).launch
	/home/circleci/repo/pkg/supervisor/supervisor.go:300 +0xb2

2019/05/20 16:30:20 api[1]: http: Server closed
```
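(Aside: the panic above is an RBAC failure: the watcher tries to list every resource whose kind it recognizes, but Ambassador's `ClusterRole` only grants access to the `getambassador.io` group. A sketch of the extra rule that would grant access to the bogus group, assuming the usual rule layout; the surrounding `ClusterRole` object in `ambassador-rbac.yaml` is not reproduced here:)

```yaml
# Sketch only: an extra entry appended to the rules list of Ambassador's
# ClusterRole, so the service account can list/watch example.com mappings.
- apiGroups: ["example.com"]
  resources: ["mappings"]
  verbs: ["get", "list", "watch"]
```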


4. To bypass this and see what happens, I added the `example.com` apiGroup to the `ClusterRole` and restarted Ambassador. Ambassador then spun up fine but errored when trying to read the `mappings.example.com` resource, resulting in an `envoy.json` with only the base configs:

```json
{
  "@type": "/envoy.config.bootstrap.v2.Bootstrap",
  "static_resources": {
    "clusters": [
      {
        "connect_timeout": "3.000s",
        "dns_lookup_family": "V4_ONLY",
        "lb_policy": "ROUND_ROBIN",
        "load_assignment": {
          "cluster_name": "cluster_127_0_0_1_8877",
          "endpoints": [
            {
              "lb_endpoints": [
                {
                  "endpoint": {
                    "address": {
                      "socket_address": {
                        "address": "127.0.0.1",
                        "port_value": 8877,
                        "protocol": "TCP"
                      }
                    }
                  }
                }
              ]
            }
          ]
        },
        "name": "cluster_127_0_0_1_8877",
        "type": "STRICT_DNS"
      }
    ],
    "listeners": [
      {
        "address": {
          "socket_address": {
            "address": "0.0.0.0",
            "port_value": 8080,
            "protocol": "TCP"
          }
        },
        "filter_chains": [
          {
            "filters": [
              {
                "config": {
                  "access_log": [
                    {
                      "config": {
                        "format": "ACCESS [%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\"\n",
                        "path": "/dev/fd/1"
                      },
                      "name": "envoy.file_access_log"
                    }
                  ],
                  "http_filters": [
                    { "name": "envoy.cors" },
                    { "name": "envoy.router" }
                  ],
                  "http_protocol_options": { "accept_http_10": false },
                  "normalize_path": true,
                  "route_config": {
                    "virtual_hosts": [
                      {
                        "domains": [ "*" ],
                        "name": "backend",
                        "routes": [
                          {
                            "match": { "case_sensitive": true, "prefix": "/ambassador/v0/check_ready" },
                            "route": {
                              "prefix_rewrite": "/ambassador/v0/check_ready",
                              "priority": null,
                              "timeout": "10.000s",
                              "weighted_clusters": {
                                "clusters": [ { "name": "cluster_127_0_0_1_8877", "weight": 100 } ]
                              }
                            }
                          },
                          {
                            "match": { "case_sensitive": true, "prefix": "/ambassador/v0/check_alive" },
                            "route": {
                              "prefix_rewrite": "/ambassador/v0/check_alive",
                              "priority": null,
                              "timeout": "10.000s",
                              "weighted_clusters": {
                                "clusters": [ { "name": "cluster_127_0_0_1_8877", "weight": 100 } ]
                              }
                            }
                          },
                          {
                            "match": { "case_sensitive": true, "prefix": "/ambassador/v0/" },
                            "route": {
                              "prefix_rewrite": "/ambassador/v0/",
                              "priority": null,
                              "timeout": "10.000s",
                              "weighted_clusters": {
                                "clusters": [ { "name": "cluster_127_0_0_1_8877", "weight": 100 } ]
                              }
                            }
                          }
                        ]
                      }
                    ]
                  },
                  "server_name": "envoy",
                  "stat_prefix": "ingress_http",
                  "use_remote_address": true,
                  "xff_num_trusted_hops": 0
                },
                "name": "envoy.http_connection_manager"
              }
            ],
            "use_proxy_proto": false
          }
        ],
        "name": "ambassador-listener-8080"
      }
    ]
  }
}
```
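The behavior above suggests the watcher keys resources by kind alone, so a `Mapping` in `example.com` collides with a `Mapping` in `getambassador.io`. A minimal illustration (plain Python, not Ambassador's actual code) of why keying a resource store by the full group/version/kind tuple avoids the collision:

```python
# Sketch (not Ambassador's real code): two CRDs share the kind "Mapping"
# but live in different apiGroups. Keying a store by kind alone lets one
# clobber the other; keying by (group, version, kind) keeps them separate.

ambassador_mapping = {"apiVersion": "getambassador.io/v1", "kind": "Mapping",
                      "metadata": {"name": "tour-ui"}}
bogus_mapping = {"apiVersion": "example.com/v1", "kind": "Mapping",
                 "metadata": {"name": "ui-mapping"}}

def gvk(resource):
    """Split apiVersion into (group, version) and append the kind."""
    group, _, version = resource["apiVersion"].rpartition("/")
    return (group, version, resource["kind"])

# Buggy: keyed by kind only -- the bogus CRD overwrites the real one.
by_kind = {}
for r in (ambassador_mapping, bogus_mapping):
    by_kind[r["kind"]] = r

# Fixed: keyed by full GVK -- both survive, and the gateway can select
# only the getambassador.io entries it actually owns.
by_gvk = {}
for r in (ambassador_mapping, bogus_mapping):
    by_gvk[gvk(r)] = r

ours = [r for k, r in by_gvk.items() if k[0] == "getambassador.io"]
```

With the kind-only store, only one of the two resources survives; with the GVK-keyed store, both are retained and the `example.com` resource can simply be ignored rather than watched or parsed.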

stale[bot] commented 5 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.