ory / oathkeeper

A cloud native Identity & Access Proxy / API (IAP) and Access Control Decision API that authenticates, authorizes, and mutates incoming HTTP(s) requests. Inspired by the BeyondCorp / Zero Trust white paper. Written in Go.

Oathkeeper bombards Ory Network with requests after upgrade to 40.x #1161

Open Withel opened 2 months ago

Withel commented 2 months ago

Preflight checklist

Ory Network Project

https://vibrant-dubinsky-6d1qtx5k0i.projects.oryapis.com

Describe the bug

After upgrading from 39.4 to 40.7, @tricky42 notified us about a huge number of requests made from oathkeeper to Ory Network's /introspect endpoint, as can be seen in the first screenshot (the upgrade was performed on 23.04 at ~15:58). From our perspective we don't see any issues in the logs or metrics. The second screenshot shows oathkeeper traffic (the spike is from the moment of the upgrade). We didn't change any configuration between the versions (except the naming of the log level), and we suspect that the cache might not be working properly.

Screenshot 1: image

Screenshot 2: image

Reproducing the bug

  1. Upgrade oathkeeper to 40.7 from 39.x

Relevant log output

No response

Relevant configuration

authenticators:
    oauth2_introspection:
      config:
        cache:
          enabled: true
          max_cost: 100
          ttl: 1h
        introspection_request_headers:
          Authorization: [REDACTED]
        introspection_url: https://[REDACTED]/admin/oauth2/introspect
        retry:
          give_up_after: 3s
          max_delay: 500ms
        token_from:
          header: Authorization
      enabled: true

Version

40.7

On which operating system are you observing this issue?

None

In which environment are you deploying?

Kubernetes with Helm

Additional Context

No response

aeneasr commented 2 months ago

Thank you for the report! Can you pinpoint which version introduced this regression? It would make the search for the regression much easier!

Withel commented 2 months ago

I'm not sure I understand you correctly. As I wrote in the description, we upgraded from 39.4 to 40.7. Or do you mean something else?

marcinfigiel commented 2 months ago

It's worth mentioning that this is our second attempt at the upgrade to 0.40.x. The first time we tried 0.40.6 and had the same effect.

aeneasr commented 2 months ago

Since there are a couple of versions between 39.4 and 40.6, I wanted to know if you can pinpoint exactly which version introduced the issue, which would make it easier to find the root cause.

marcinfigiel commented 2 months ago

> Since there are a couple of versions between 39.4 and 40.6, I wanted to know if you can pinpoint exactly which version introduced the issue, which would make it easier to find the root cause.

Unfortunately not, we've only tried these two versions :(

Withel commented 2 months ago

@aeneasr we've managed to pinpoint the exact version that introduces this issue. It happens between v0.39.4 and v0.40.0. Hope this helps, and please let me know if you need anything else.

aeneasr commented 2 months ago

I think your max_cost is just way too low. I see that we also use a misleading configuration in ristretto. The default max_cost is set to 100000000, which I think is way too high.

Try playing around with that value to see if it has an impact.
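
For illustration, trying a much larger value means changing the cache block in the authenticator config shown above roughly like this (10000 is only an assumed example to experiment with, not a recommendation):

authenticators:
    oauth2_introspection:
      config:
        cache:
          enabled: true
          # illustrative value: much larger than 100 so entries are not evicted almost immediately
          max_cost: 10000
          ttl: 1h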

aeneasr commented 2 months ago

Fix for some of the config values: https://github.com/ory/oathkeeper/commit/23730570e0ac35bec214f78a0e951501f58c598b

aeneasr commented 2 months ago

Basically, before we were using the internal cost, which I think is the key length plus the cost function. Since your max_cost is only 100, the cache probably ran out of space after one or two keys, so it's constantly evicting your values.

The fixes ignore the internal cost, so you actually get 1 cost = 1 token instead of 1 token = 1 cost + cost of key.
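
To make the cost accounting concrete, here is a minimal, illustrative Go sketch of how a ristretto cache behaves with the IgnoreInternalCost option from the linked commit; it is not the actual Oathkeeper code, and the sizes are assumptions:

package main

import (
    "fmt"
    "time"

    "github.com/dgraph-io/ristretto"
)

func main() {
    cache, err := ristretto.NewCache(&ristretto.Config{
        NumCounters: 10000, // ristretto recommends roughly 10x the number of items you expect to keep
        MaxCost:     100,   // with a flat cost of 1 per entry, this now means "about 100 tokens"
        BufferItems: 64,
        // Without this flag, ristretto adds an internal per-item overhead to the
        // supplied cost, so a MaxCost of 100 is exhausted after a handful of
        // entries and the cache keeps evicting, forcing a fresh /introspect call
        // for almost every request.
        IgnoreInternalCost: true,
    })
    if err != nil {
        panic(err)
    }

    // Cache an introspection result with cost 1 and a TTL, then read it back.
    cache.SetWithTTL("token-abc", "introspection-result", 1, time.Hour)
    cache.Wait() // Set is asynchronous; wait so the value is visible to the Get below

    if v, ok := cache.Get("token-abc"); ok {
        fmt.Println("cache hit:", v)
    }
}

Under this accounting, max_cost effectively becomes the number of tokens the cache can hold at once, which is why a value of 100 evicts entries almost immediately under real traffic.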

Withel commented 2 months ago

Soooo, with the help of @Demonsthere, we checked the following values for max_cost:

~15:05 CET: 100M (default) -> 300-400 req/m with initial spike 
~15:35 CET: 50M -> 300-400 req/m
~15:55 CET: 5M -> for a second dropped to ~270 req/m, then increased to ~300-400 req/m

And here's the graph: image

I hope this helps. For the time being we will go back to 39.4 and wait for further updates. Please let us know if you need anything else.

aeneasr commented 2 months ago

So what you're saying is that it doesn't have an effect?

Withel commented 2 months ago

So, I must admit that I made a mistake that I realised just after posting my last comment. Unfortunately, I didn't keep an eye on the pods after the deployment, and it turned out that they were not restarting after changing max_cost. As far as I know, oathkeeper does not have configuration hot reload, so I will have to redo those experiments. I'm not sure when I'll be able to perform them, since right now I'm out of the office, but I'll do this as soon as possible. Apologies for that.

aeneasr commented 2 months ago

Hot reloading only works for things that can be changed during runtime. Caches, unfortunately, are large memory objects that are allocated at process start and cannot be changed at runtime.

Withel commented 2 months ago

So, I have redone the tests (now making sure that after each update the pods are restarted correctly) and here are the results. The values are the same as before:

Before - 39.4 with max_cost set to 100
9:30 CET - 40.7 with max_cost set to 100
~9:55 CET - 40.7 with max_cost removed (set to 100M, default)
~10:20 CET - 40.7 with max_cost set to 50M
~10:50 CET - 40.7 with max_cost set to 5M

Results (courtesy of @tricky42): image

We also suspected that the oathkeeper containers might be running out of memory, but we confirmed that this is not the case: image

aeneasr commented 1 month ago

OK so increasing the cache size fixes the problem?