hania-elayoubi opened this issue 3 years ago
I think this is a reasonable request. There are now more options available via https://github.com/envoyproxy/envoy/pull/14588 but I don't think it's possible yet to hit multiple limits. cc @kyessenov
Yeah, it's only possible with descriptors. I guess we could add route name / vhost name as another rate limit action as a workaround.
It would be useful to allow multiple match configurations at the same time as well. I'm trying to apply a rate to all requests as well as a different rate for a specific header, where exhaustion of the overall bucket should also stop the more specific match, without success.
The `stage` property in https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route_components.proto#envoy-v3-api-msg-config-route-v3-ratelimit cannot be used, because the `typed_per_filter_config` on the route is a map keyed by filter name, and the stage sits directly on https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/http/local_ratelimit/v3/local_rate_limit.proto#envoy-v3-api-msg-extensions-filters-http-local-ratelimit-v3-localratelimit in the value. Trying to apply two filters with different names seems to result in one of them being ignored. This is using Istio 1.10.2 / Envoy 1.18.2.
I think we need API re-work for local rate limit to achieve this. There are several possible approaches:

2. `typed_per_filter_config` by the extension name rather than extension type.
3. A `fallthrough` boolean on descriptors that indicates whether matching should continue.

Thoughts on API design from @envoyproxy/api-shepherds are welcome here.
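To make option (3) concrete, here is a purely hypothetical sketch of what a `fallthrough` flag on a descriptor could look like; no such field exists in the LocalRateLimit proto, and the `generic_key` entry is only illustrative:

```yaml
descriptors:
- entries:
  - key: path
    value: /foo
  token_bucket:
    max_tokens: 3
    tokens_per_fill: 3
    fill_interval: 1s
  fallthrough: true  # hypothetical field: keep evaluating later descriptors after this one matches
- entries:
  - key: generic_key
    value: all_traffic
  token_bucket:
    max_tokens: 30
    tokens_per_fill: 30
    fill_interval: 60s
```

With such a flag, a request matching `/foo` would have to pass both the 3/s bucket and the 30/min bucket instead of stopping at the first match.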
(3) sounds like the best option to me, though I haven't fully wrapped my head around (1).
Re: (2) we should solve that problem anyway. I think this is leftover tech debt from when filters were identified by their name strings as types.
Yeah, we do need to solve (2) anyway. For context, see #12274.
(Yes agreed on solving (2) additionally)
I agree that (2) has wider benefits and satisfies the need for two local rate limits. There's probably some space for performance optimizations to avoid computing rate limit descriptors multiple times across two instances of the filter. Do you think it's worth moving rate limit definitions from VH/route to filter config as well?
> Do you think it's worth moving rate limit definitions from VH/route to filter config as well?
I thought this was already done? Don't we already use typed filter config?
The typed filter config supplies descriptors with buckets. But rate limit actions are defined per route itself. So there's no scoping of rate limit actions to individual instances of rate limit filter (local or global).
> The typed filter config supplies descriptors with buckets. But rate limit actions are defined per route itself. So there's no scoping of rate limit actions to individual instances of rate limit filter (local or global).
Oh hmm, sorry I thought we had cleaned this up by now. Yes, this is just long ago legacy that should be cleaned up. I think all rate limit stuff should be removed from route and moved into typed filter config. If we do that along with (2) that would be a nice way of handling this I agree. (CORS is in a similar situation and has a tracking issue, I just thought rate limit was already done. I guess not.)
I'm working on the RateLimit implementation in gRPC. I was just looking into the "special handling" we'd have to do to make the RateLimit filter work with its config fragmented into `VirtualHost.rate_limits`/`RouteAction.rate_limits`, and `HttpFilter.typed_config`.
Getting this cleanup done will be a great help, and will prevent a lot of temporary code/tech debt.
cc @yanavlasov
Tell me, is there any workaround to specify two local rate limits?
@nikolasj It depends on what you need. There was a relatively recent change where matching multiple buckets is supported; see https://github.com/envoyproxy/envoy/pull/20869 and the related https://github.com/envoyproxy/envoy/pull/25139
@jcetkov I need to specify two local rate limits for a URL/path: one limit of 3 requests per second, and a second limit of 30 requests per minute, with both limits enforced.
Can it be done? It seemed to me that it can't. If not, where can I see an example of how to do this?
That would be trickier, as you can't have two descriptors with the same value, and if you try to have two actions with the same `header_name` but different `descriptor_key` it outright doesn't work.
I think you can achieve this by having a route per path with the main bucket at 3/s and a descriptor for the same path at 30/min.
Something like this (I was running this in Docker with a simple HTTP listener on 8086; modify to your needs):
```yaml
static_resources:
  listeners:
  - name: main
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8888
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: auto
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/foo"
                route:
                  rate_limits:
                  - stage: 0
                    actions:
                    - request_headers:
                        header_name: :path
                        descriptor_key: path
                  cluster: ext_web_service
                typed_per_filter_config:
                  envoy.filters.http.local_ratelimit:
                    "@type": type.googleapis.com/udpa.type.v1.TypedStruct
                    type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
                    value:
                      stat_prefix: http_local_rate_limiter
                      token_bucket:
                        max_tokens: 3
                        tokens_per_fill: 3
                        fill_interval: 1s
                      filter_enabled:
                        runtime_key: local_rate_limit_enabled
                        default_value:
                          numerator: 100
                          denominator: HUNDRED
                      filter_enforced:
                        runtime_key: local_rate_limit_enforced
                        default_value:
                          numerator: 100
                          denominator: HUNDRED
                      response_headers_to_add:
                      - append: false
                        header:
                          key: x-local-rate-limit
                          value: 'true'
                      descriptors:
                      - entries:
                        - key: path
                          value: /foo
                        token_bucket:
                          max_tokens: 30
                          tokens_per_fill: 30
                          fill_interval: 60s
          access_log:
          - name: envoy.access_loggers.file
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
              format: "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% \"%DYNAMIC_METADATA(istio.mixer:status)%\" \"%UPSTREAM_TRANSPORT_FAILURE_REASON%\" %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME% %ROUTE_NAME%\n"
              path: /dev/stdout
          http_filters:
          - name: envoy.filters.http.local_ratelimit
            typed_config:
              "@type": type.googleapis.com/udpa.type.v1.TypedStruct
              type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
              value:
                stat_prefix: http_local_rate_limiter
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: ext_web_service
    connect_timeout: 10s
    type: STATIC
    lb_policy: round_robin
    load_assignment:
      cluster_name: ext_web_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 192.168.1.132
                port_value: 8086
```
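To see why the config above enforces both rates, here is a toy Python model (not Envoy's actual implementation; the exact consume order and refill mechanics differ) of two layered token buckets, where a request is admitted only if both the 3/s default bucket and the 30/min descriptor bucket still have a token:

```python
# Toy model of two layered token buckets. A request passes only if every
# matching bucket still has a token; either bucket running dry denies it.
class TokenBucket:
    def __init__(self, max_tokens, tokens_per_fill, fill_interval):
        self.max_tokens = max_tokens
        self.tokens_per_fill = tokens_per_fill
        self.fill_interval = fill_interval
        self.tokens = max_tokens
        self.last_fill = 0.0

    def refill(self, now):
        # Add tokens for every full fill_interval that has elapsed.
        while now - self.last_fill >= self.fill_interval:
            self.last_fill += self.fill_interval
            self.tokens = min(self.max_tokens, self.tokens + self.tokens_per_fill)

    def try_consume(self, now):
        self.refill(now)
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

default_bucket = TokenBucket(3, 3, 1.0)        # 3 requests/second
descriptor_bucket = TokenBucket(30, 30, 60.0)  # 30 requests/minute

def allowed(now):
    # Deny if either bucket is exhausted.
    return default_bucket.try_consume(now) and descriptor_bucket.try_consume(now)

# One request every 250 ms for 12 s = 48 attempts. The 3/s bucket caps each
# second at 3, and the 30/min bucket cuts traffic off after 30 total.
admitted = sum(allowed(i * 0.25) for i in range(48))
print(admitted)  # prints: 30
```

The short-interval bucket shapes the burst rate while the long-interval bucket caps the sustained total, which is exactly the two-limits-per-path behavior being asked for.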
@jcetkov Tell me please: I tried to do it with an EnvoyFilter, but the restriction applies to all requests, although I specified it only for the /test path. Can you tell me what I'm doing wrong?
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: local-ratelimit
spec:
  workloadSelector:
    labels:
      app: echo-server
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
            subFilter:
              name: "envoy.filters.http.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.local_ratelimit
        typed_config:
          "@type": type.googleapis.com/udpa.type.v1.TypedStruct
          type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
          value:
            stat_prefix: http_local_rate_limiter
            token_bucket:
              max_tokens: 3
              tokens_per_fill: 3
              fill_interval: 1s
            filter_enabled:
              runtime_key: local_rate_limit_enabled
              default_value:
                numerator: 100
                denominator: HUNDRED
            filter_enforced:
              runtime_key: local_rate_limit_enforced
              default_value:
                numerator: 100
                denominator: HUNDRED
            response_headers_to_add:
            - append: false
              header:
                key: x-local-rate-limit
                value: 'true'
            descriptors:
            - entries:
              - key: path
                value: "/test"
              token_bucket:
                max_tokens: 3
                tokens_per_fill: 3
                fill_interval: 1s
```
Now you are talking about Istio specifics, which is a bit out of scope here. With this you applied the filter to all routes in Envoy. You'd have to have a VirtualService matching your path (so there is a corresponding route in Envoy) and a match rule in your EnvoyFilter that applies only to that route. You can check :15000/config_dump in your pods to see what the effective Envoy configuration looks like.
It seems that with an EnvoyFilter it is impossible to restrict only specific paths. It looks like you need to stand up a rate limiter service instead.
It is very possible. You also missed the part of the configuration at the route level where the descriptor action is defined:
```yaml
rate_limits:
- stage: 0
  actions:
  - request_headers:
      header_name: :path
      descriptor_key: path
```
But you are also applying the default bucket of 3/s to everything everywhere. If you make that large enough and define descriptors for what you want, it works quite nicely. You were, however, asking for a workaround for two rates on the same match, and as I said, for that you'd have to have the specific route first and then apply the EnvoyFilter to just that route using the route match https://istio.io/latest/docs/reference/config/networking/envoy-filter/#EnvoyFilter-RouteConfigurationMatch-RouteMatch (to effectively arrive at the config I posted above).
But for the second time, this issue is about Envoy capability, not about the flexibility with which you can configure it in Istio...
edit: to apply the descriptor limiting in Istio, you need two EnvoyFilters: one that edits the route, adding the `rate_limits` and the `typed_per_filter_config`, and a second one that adds the rate limiting filter to the filter chain.
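As an untested sketch of the route-editing half of that approach (the route name, labels, paths, and limit numbers are placeholders to adapt; the second EnvoyFilter inserting the filter itself looks like the HTTP_FILTER patch earlier in the thread):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: local-ratelimit-route
spec:
  workloadSelector:
    labels:
      app: echo-server
  configPatches:
  - applyTo: HTTP_ROUTE
    match:
      context: SIDECAR_INBOUND
      routeConfiguration:
        vhost:
          route:
            name: my-route   # placeholder: the named route from your VirtualService
    patch:
      operation: MERGE
      value:
        route:
          rate_limits:
          - actions:
            - request_headers:
                header_name: :path
                descriptor_key: path
        typed_per_filter_config:
          envoy.filters.http.local_ratelimit:
            "@type": type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
            value:
              stat_prefix: http_local_rate_limiter
              token_bucket:          # default bucket made large so only descriptors bite
                max_tokens: 1000
                tokens_per_fill: 1000
                fill_interval: 1s
              descriptors:
              - entries:
                - key: path
                  value: /test
                token_bucket:
                  max_tokens: 3
                  tokens_per_fill: 3
                  fill_interval: 1s
```

The route must have a name in the VirtualService for the `routeConfiguration.vhost.route.name` match to find it; check :15000/config_dump to confirm the patch landed.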
Please read the documentation. While it's quite complex (as was acknowledged earlier in this thread), it is documented sufficiently and with examples: https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/local_rate_limit_filter#example-configuration
@jcetkov
> Please read the documentation. While it's quite complex (as was acknowledged earlier in this thread), it is documented sufficiently and with examples: https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/local_rate_limit_filter#example-configuration
Unfortunately, the example in the documentation does not work for me. For unknown reasons, the restrictions always apply to all paths :(
> edit: to apply the descriptor limiting in istio, you need 2 filters. one that edit the route, adding the rate_limits and the typed_per_filter_config and second one, that adds the rate limiting filter to the filter chain.
Tell me please, do you have any sample code? Nothing I've tried works, but I want to understand whether it's still possible to do this without the rate limit service.
I would be very grateful for help, as I am stuck on a solution. Istio version 1.17.3.
Hi, is there a way to disable the default token bucket when using the per-route `envoy.filters.http.local_ratelimit`? The docs mention that if there are no matching descriptor entries, the default token bucket is used. I want to apply rate limiting only to requests with a specific header, and I have just one route in the `route_config`.
I could make the default token bucket big enough that all other requests are allowed; however, I was wondering if there is a way to have only the descriptor token bucket and no default token bucket, so that requests not matching any descriptor are not subjected to rate limiting at all.
```yaml
route_config:
  name: local_route
  virtual_hosts:
  - name: local_service
    domains: ["*"]
    routes:
    - match: {prefix: "/"}
      route:
        cluster: service_protected_by_rate_limit
        rate_limits:
        - actions:
          - request_headers:
              header_name: "my-header"
              descriptor_key: my_header
      typed_per_filter_config:
        envoy.filters.http.local_ratelimit:
          "@type": type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
          stat_prefix: test
          token_bucket:  # is it possible to not have this applied?
            max_tokens: 1000
            tokens_per_fill: 1000
            fill_interval: 60s
          filter_enabled:
            runtime_key: test_enabled
            default_value:
              numerator: 100
              denominator: HUNDRED
          filter_enforced:
            runtime_key: test_enforced
            default_value:
              numerator: 100
              denominator: HUNDRED
          response_headers_to_add:
          - append_action: OVERWRITE_IF_EXISTS_OR_ADD
            header:
              key: x-test-rate-limit
              value: 'true'
          descriptors:
          - entries:
            - key: my_header
              value: somevalue
            token_bucket:
              max_tokens: 10
              tokens_per_fill: 10
              fill_interval: 60s
```
No. The refill timer runs on the default bucket and the descriptor buckets just ride along, so the default bucket cannot be removed. Make it large enough that it's not a concern.
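The effect of that workaround can be shown with a small toy model in Python (not Envoy internals; bucket sizes are arbitrary): with a huge default bucket, unmatched traffic is effectively unlimited while descriptor-matched traffic is still capped.

```python
# Toy model: a very large default bucket plus one descriptor bucket.
# Only requests matching the descriptor are effectively rate limited.
def make_bucket(tokens):
    state = {"tokens": tokens}
    def consume():
        if state["tokens"] > 0:
            state["tokens"] -= 1
            return True
        return False
    return consume

default = make_bucket(10**9)   # "big enough" default bucket
per_header = make_bucket(10)   # descriptor bucket: 10 requests per window

def allowed(matches_descriptor):
    ok = default()                     # default bucket is always consulted
    if matches_descriptor:
        ok = per_header() and ok       # matched descriptor must also have a token
    return ok

plain = sum(allowed(False) for _ in range(100))   # unmatched traffic
tagged = sum(allowed(True) for _ in range(100))   # descriptor-matched traffic
print(plain, tagged)  # prints: 100 10
```

All 100 unmatched requests pass, while only the first 10 matched ones do, which is the behavior the question was after, minus the (harmless) bookkeeping on the oversized default bucket.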
Title: Allow setting multiple local rate limit configurations per route or virtual host
Description: It is useful to define rate limits for a few resolutions, for instance 50 requests/second and 1000 requests/minute for the same route or virtual host.
The global rate limiting API allows multiple rate limit configurations per route or virtual host.
The local rate limiting API, however, only allows a single local rate limit configuration.
Using Istio 1.8.1 + Envoy 1.16.1, I am only able to effectively define one local rate limit configuration per route or virtual host. Since I need to define two local rate limit resolutions, I am going around this limitation by defining one token bucket configuration for the inbound virtual host, e.g. 50 requests/second, and another token bucket configuration for the outbound virtual host, e.g. 1000 requests/minute. If I needed to set up more than two local rate limit resolutions, I wouldn't be able to.
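For contrast, the global rate limit API's route-level `rate_limits` list already accepts multiple entries, each producing its own descriptor that the external rate limit service can bind to a different limit. A minimal sketch (the cluster and descriptor values are illustrative only):

```yaml
route:
  cluster: some_cluster
  rate_limits:
  - actions:
    - generic_key:
        descriptor_value: per_second_bucket   # illustrative: service maps this to 50/s
  - actions:
    - generic_key:
        descriptor_value: per_minute_bucket   # illustrative: service maps this to 1000/min
```

This is the multiplicity the issue asks the local rate limit filter to support as well.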