PostgREST / postgrest

REST API for any Postgres database
https://postgrest.org
MIT License

IO tests depend too heavily on timing / sleeping #3424

Open wolfgangwalther opened 2 months ago

wolfgangwalther commented 2 months ago

I just ran the IO tests on a heavily overloaded system and had 32 tests failing because of timing-related issues. Those same tests pass just fine without load. This is annoying, because I can't continue working on PostgREST while I have other heavy tasks (rebuilding the world for nixpkgs...) running. This is also potentially an issue on much slower systems.

steve-chavez commented 2 months ago

Maybe we can separate those tests (test_io_timing.py), so they're not always run. Like what we do with postgrest-test-memory.
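
For example, assuming the timing-sensitive tests were moved into their own file as suggested above (test_io_timing.py is only a proposed name), the default run could skip them with something like:

pytest test/io --ignore=test/io/test_io_timing.py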

wolfgangwalther commented 2 months ago

This seems like a related issue in CI: https://github.com/PostgREST/postgrest/actions/runs/8788477073/job/24116006025?pr=3427

We surely want to run them in CI, so separating them wouldn't help, I guess.

wolfgangwalther commented 2 months ago

Another random CI failure: https://github.com/PostgREST/postgrest/actions/runs/8842033759/job/24280038657?pr=3453

wolfgangwalther commented 2 months ago

And one more: https://github.com/PostgREST/postgrest/actions/runs/8844111670/job/24285424746

wolfgangwalther commented 2 months ago

https://github.com/PostgREST/postgrest/actions/runs/8844127847/job/24285463347

steve-chavez commented 2 months ago

Hm, perhaps some of the pool timeout errors could be solved by using the new /metrics endpoint instead of checking the logs.

Specifically https://postgrest.org/en/latest/references/observability.html#pgrst-db-pool-timeouts-total
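
As a sketch, a test could scrape that counter from the admin /metrics endpoint instead of parsing the log; the helper below is illustrative and not part of the actual test harness (the admin_url parameter is assumed to point at the admin server):

import re

import requests

def db_pool_timeouts(admin_url):
    "Scrape pgrst_db_pool_timeouts_total from the admin /metrics endpoint."
    body = requests.get(admin_url + "/metrics").text
    match = re.search(r"^pgrst_db_pool_timeouts_total(?:\{[^}]*\})?\s+([0-9.]+)", body, re.M)
    return float(match.group(1)) if match else 0.0

# e.g. assert that the counter increased after provoking a pool timeout:
# assert db_pool_timeouts(admin_url) > before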

steve-chavez commented 2 months ago

Additionally, I think the acquisition timeout can be less than a second (https://hackage.haskell.org/package/hasql-pool-1.1/docs/Hasql-Pool-Config.html#v:acquisitionTimeout)... but I'm not sure if our config allows that right now.

steve-chavez commented 1 month ago

I've also noticed that sometimes the io tests get stuck: https://github.com/PostgREST/postgrest/actions/runs/8857677646/job/24325423347?pr=3383

I had that happen just once locally, after https://github.com/PostgREST/postgrest/pull/3229. That should be solved by using metrics, I think.

steve-chavez commented 1 month ago

io test got stuck again https://github.com/PostgREST/postgrest/actions/runs/8857677646/job/24325423347?pr=3383

We do some waiting when reading the logs https://github.com/PostgREST/postgrest/blob/7e5fd317e6b6060e31e7120837cec95ee4028869/test/io/postgrest.py#L60-L70

But it's capped at 1 second... it's weird that it gets stuck.

Edit 1: https://stackoverflow.com/questions/24640079/process-stdout-readline-hangs-how-to-use-it-properly

Edit 2: Seems flushing worked https://github.com/PostgREST/postgrest/pull/3465


This looks to be the test that fails https://github.com/PostgREST/postgrest/blob/7e5fd317e6b6060e31e7120837cec95ee4028869/test/io/test_io.py#L1361-L1389
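
For reference, the usual workaround from that Stack Overflow thread is to read the pipe from a helper thread and pull lines off a queue with a timeout. A minimal sketch (not the actual harness code; names are illustrative):

import queue
import threading

def start_log_reader(proc):
    "Pump the subprocess's stdout into a queue from a daemon thread."
    lines = queue.Queue()

    def pump():
        for line in proc.stdout:
            lines.put(line)

    threading.Thread(target=pump, daemon=True).start()
    return lines

def next_log_line(lines, timeout=1):
    "Return the next log line, or None if nothing arrives within the timeout."
    try:
        return lines.get(timeout=timeout)
    except queue.Empty:
        return None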

wolfgangwalther commented 1 month ago

One more: https://github.com/PostgREST/postgrest/actions/runs/8949824082/job/24584542317

And one more: https://github.com/PostgREST/postgrest/actions/runs/8952889247/job/24590783068

wolfgangwalther commented 1 month ago

I looked into this a bit - it seems like the main challenge for the IO tests is that it's impossible to determine whether PostgREST is currently in the process of reloading the config / schema cache, or has already finished doing so and failed. In other words: we only return either 200 or 503 from /ready, but nothing else.

I think it would be helpful if we'd return more fine-grained status codes:

  • 200 when ready
  • 503 Service Unavailable while PostgREST is in the process of doing something: During connection to the database, during config reload, during schema cache reload.
  • 500 Internal Server Error as soon as one of the above fails.

This should allow us to replace the following with calls to some variation of wait_until_ready:

import time

def sleep_until_postgrest_scache_reload():
    "Sleep until schema cache reload"
    time.sleep(0.3)

def sleep_until_postgrest_config_reload():
    "Sleep until config reload"
    time.sleep(0.2)

def sleep_until_postgrest_full_reload():
    "Sleep until schema cache plus config reload"
    time.sleep(0.3)

It would also be better for observability, I guess?
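
A sketch of what such a wait_until_ready variant could look like in the IO test helpers, assuming the fine-grained status codes above; the admin_url parameter and the timeouts are illustrative, not the real harness API:

import time

import requests

def wait_until_ready(admin_url, timeout=10, interval=0.1):
    "Poll the admin /ready endpoint until it returns 200, instead of sleeping blindly."
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if requests.get(admin_url + "/ready").status_code == 200:
                return
        except requests.ConnectionError:
            pass  # the admin server might not be accepting connections yet
        time.sleep(interval)
    raise TimeoutError("PostgREST did not become ready in time")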

steve-chavez commented 1 month ago

@wolfgangwalther Great idea! I agree.

wolfgangwalther commented 1 month ago

This is happening on almost every commit I merge to main now - they just get stuck entirely somewhere in the IO tests. It would be great if you could look at implementing my proposal above, @steve-chavez. I am not really familiar with those parts of the code.

steve-chavez commented 1 month ago

@wolfgangwalther Looking into this now. Admin API has another bug too (connection drop not detected), so will fix that first.

steve-chavez commented 1 month ago

  • 200 when ready
  • 503 Service Unavailable while PostgREST is in the process of doing something: During connection to the database, during config reload, during schema cache reload.
  • 500 Internal Server Error as soon as one of the above fails.

Looking more closely at the above proposal, 500 will only happen when the API socket is unreachable. For all other cases we always retry and the status will be 503. This also implies that the /live endpoint will now have to reply with a 500 status to be consistent. Thus, this will be a breaking change.


This change turned out to be a bit more complex than I thought.

steve-chavez commented 1 month ago

during config reload

Not sure if we should conflate 503 for config reloading. During config reloading we're operational and serving requests normally. Unlike schema cache reloading (makes requests wait) and connection reloading (makes requests fail).

Perhaps we can use another 2xx for config reloading?

steve-chavez commented 1 month ago

Thus, this will be a breaking change.

To avoid the breaking change and also the problem of responding with 503 while we're operational: how about adding a custom health check endpoint? There we could have the more fine-grained behavior.

I'm thinking about a /loaded endpoint. Wouldn't that be better, since this endpoint would only be for development/testing purposes anyway?

@wolfgangwalther WDYT?

wolfgangwalther commented 1 month ago

For all other cases we always retry and the status will be 503.

Do we not have any kind of backoff there? Are we really doing an endless loop of immediately retrying again?

This also implies that the /live endpoint will now have to reply with a 500 status to be consistent. Thus, this will be a breaking change.

503 is considered a "temporary" problem, so I would argue that this is actually a bugfix for /live. Also, did we really promise "/live fails with exactly 503", or is the promise our API gives rather "/live gives you 4xx or 5xx if something is wrong"? I doubt that anyone is using the live endpoint with an exact match on the error code. So I don't think we need to treat this as a breaking change.

Not sure if we should conflate 503 for config reloading. During config reloading we're operational and serving requests normally. Unlike schema cache reloading (makes requests wait) and connection reloading (makes requests fail).

Right, we should not try to put other stuff into the ready endpoint that is clearly not related.

To avoid the breaking change and also the problem of responding with 503 while we're operational: how about adding a custom health check endpoint? There we could have the more fine-grained behavior.

I'm thinking about a /loaded endpoint. Wouldn't that be better, since this endpoint would only be for development/testing purposes anyway?

At this stage it might be easier to just add better log parsing to the IO tests, because all this information (config reload started, config reload finished, etc.) is available through the log, I think. This should still allow us to remove the quite random "wait for X seconds" cases I mentioned above.

TLDR: I think we don't need to change /ready and/or /live for the purpose of IO tests. If you still want to do the improvement regarding "schema cache reload in progress" - cool.
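
As a sketch, such log-based waiting could look roughly like this in the test helpers; the exact log messages and the line-reading helper are assumptions, not the real harness API:

import time

def wait_for_log_line(next_line, needle, timeout=10):
    "Read log lines until one contains `needle`, or give up after the timeout."
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        line = next_line(timeout=1)  # hypothetical helper; returns None on timeout
        if line and needle in line:
            return line
    raise TimeoutError(f"did not see {needle!r} in the log within {timeout}s")

# e.g. wait_for_log_line(postgrest.read_stdout, "Config reloaded")  # message text assumed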

steve-chavez commented 1 month ago

Do we not have any kind of backoff there? Are we really doing an endless loop of immediately retrying again?

Yes, we do have a backoff.

503 is considered a "temporary" problem, so I would argue that this is actually a bugfix for /live. ...So I don't think we need to treat this as a breaking change.

Agree.

TLDR: I think we don't need to change /ready and/or /live for the purpose of IO tests. If you still want to do the improvement regarding "schema cache reload in progress" - cool.

Cool. Will finish that on https://github.com/PostgREST/postgrest/pull/3490.

At this stage it might be easier to just add better log parsing to the IO tests, because all this information (config reload started, config reload finished, etc.) is available through the log, I think. This should still allow us to remove the quite random "wait for X seconds" cases I mentioned above.

Parsing the log is still prone to failure though; implementing a /loaded endpoint would be easier. I also wonder if there's another way to signal that we're done reloading state...

steve-chavez commented 1 month ago

So check this out: Kubernetes has individual health checks.

Since the admin server already has /schema_cache and /config, how about adding /schema_cache/ready and /config/ready?

wolfgangwalther commented 1 month ago

So check this out: Kubernetes has individual health checks.

Since the admin server already has /schema_cache and /config, how about adding /schema_cache/ready and /config/ready?

Hm, but that's only because k8s has different components which work independently of each other. That's not the case for us. It's not like Kubernetes could consume separate ready endpoints in any meaningful way: you'd still need a single ready endpoint to actually make use of them.

wolfgangwalther commented 1 month ago

Parsing the log is still prone to failure though

What kinds of failures do you have in mind?

steve-chavez commented 1 month ago

These individual health checks should not be consumed by machines but can be helpful for a human operator to debug a system

https://kubernetes.io/docs/reference/using-api/health-checks/#individual-health-checks

I don't see why we couldn't adopt that and have /ready/schema_cache and /ready/config; it looks like a great idea/design. Our purpose aligns well with what's described above.
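
Hypothetically, the tests (or a human operator) could then probe each concern separately, along the lines of (endpoint paths as proposed above, admin port assumed):

curl --fail http://localhost:3001/ready/schema_cache
curl --fail http://localhost:3001/ready/config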

What kinds of failures do you have in mind?

The error message can change, and reading the log can hang: https://github.com/PostgREST/postgrest/pull/3465 (and that PR didn't solve it, it still happens: https://github.com/PostgREST/postgrest/actions/runs/8972240697/job/24639793119).

wolfgangwalther commented 1 month ago

Do we not have any kind of back-off there? Are we really doing and endless loop of immediately retrying again?

Yes, we do have a backoff.

Are you sure the backoff works for example when the schema cache query fails due to timeout like in the test_admin_ready_includes_schema_cache_state test?

I can reproduce the hanging IO tests with this test-case, like this:

This immediately hangs forever.

When I then add a timeout=1 to the postgrest.admin.get('/ready') call, I get a failing test and some log output. Shortened (I took out everything except those lines), the output looks like this:

08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...
08/May/2024:21:06:46 +0200: Attempting to connect to the database...

Why are the timestamps not increasing here? This doesn't look like a backoff.

This does not explain the hanging test yet; just an observation so far.

wolfgangwalther commented 1 month ago

Are you sure the backoff works for example when the schema cache query fails due to timeout like in the test_admin_ready_includes_schema_cache_state test?

This indicates that there is no backoff involved and instead the reload of the schema cache is tried 10x / s:

https://github.com/PostgREST/postgrest/blob/f9e974099926b8c1eeab9cfc9fc935fd142bddbf/src/PostgREST/AppState.hs#L169-L175

This does not match my observation, however, because I had 33 attempts to connect with the exact same timestamp (second precision). So either the display of the timestamp is broken, or this is repeated much more than just 10x / s.

I assume the 10x / s debounce only affects the "happy path" inside internalConnectionWorker. But once any path with killThread <mainthread> is involved.. the main thread is restarted immediately without any delay or backoff?

steve-chavez commented 1 month ago

TLDR: I think we don't need to change /ready and/or /live for the purpose of IO tests. If you still want to do the improvement regarding "schema cache reload in progress" - cool.

Merged that on https://github.com/PostgREST/postgrest/pull/3490.

I tried replacing sleep_until_postgrest_scache_reload(), but it's used in some places where it's actually waiting for the connection recovery to be done, not the schema cache reload. I'm thinking we not only need a /ready/config but also a /ready/connection, to make testing independent of timings.

Overall, this part of the code is too hard to refactor/modify because we don't have tests https://github.com/PostgREST/postgrest/issues/1766 (my latest attempt at making progress was https://github.com/PostgREST/postgrest/issues/1766#issuecomment-2098571024). It has effectively been at the "make it work" stage for many years now.

steve-chavez commented 1 month ago

This indicates that there is no backoff involved and instead the reload of the schema cache is tried 10x / s:

Right, the backoff is only for the connection recovery (establishConnection); if that succeeds, then work is repeated and it goes straight ahead to loadSchemaCache:

https://github.com/PostgREST/postgrest/blob/1b584f7e9ceda69b956e0b50146d8ba05f2f337b/src/PostgREST/AppState.hs#L365-L398

I remember I kept it that way because the schema cache concept is tied to connections too (they have their own cache https://github.com/PostgREST/postgrest/issues/2620). But it could be improved.

If we really want to improve this part we need the testing utilities (https://github.com/PostgREST/postgrest/issues/1766), otherwise any attempt at refactoring will cause regressions.

steve-chavez commented 1 month ago

> 08/May/2024:21:06:46 +0200: Attempting to connect to the database...
> 08/May/2024:21:06:46 +0200: Attempting to connect to the database...
> 08/May/2024:21:06:46 +0200: Attempting to connect to the database...
> ...

I can reproduce the above by just doing alter role postgrest_test_authenticator set statement_timeout to '10'. To add a way to test the failure, I think we could add a metric pgrst_schema_cache_failed_loads_total and check whether it increases rapidly.

Additionally, since the connection recovery process is so tied to the schema cache, I think I can use the same backoff for both queries (the pg version query for the connection, and likewise the schema cache query). Otherwise, if we had a dedicated backoff for the schema cache query, we wouldn't know whether a failure was caused by the connection being down or by a statement timeout.
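
As a sketch, a test for that could sample the proposed counter twice and fail if it grows too fast; the metric name is the one proposed above (it does not exist yet) and the helper names are illustrative:

import re
import time

import requests

def scache_failed_loads(admin_url):
    "Scrape the proposed pgrst_schema_cache_failed_loads_total counter from /metrics."
    body = requests.get(admin_url + "/metrics").text
    match = re.search(r"^pgrst_schema_cache_failed_loads_total\s+([0-9.]+)", body, re.M)
    return float(match.group(1)) if match else 0.0

def assert_scache_reload_backs_off(admin_url, max_new_failures=5):
    "Fail if the counter keeps climbing rapidly, i.e. reloads are retried without backoff."
    before = scache_failed_loads(admin_url)
    time.sleep(1)
    after = scache_failed_loads(admin_url)
    assert after - before <= max_new_failures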

wolfgangwalther commented 1 month ago

There is another failure of the IO tests after the latest commit to main (changelog): https://github.com/PostgREST/postgrest/actions/runs/9045664975/job/24855643429

This seems just like a simple timeout when starting up postgrest in that test. Quite random, though.

steve-chavez commented 1 month ago

From the log lines, that failure looks like the request was hitting the API server instead of the Admin server.

::1 - postgrest_test_anonymous [11/May/2024:18:35:17 +0000] "GET /ready HTTP/1.1" 404 - "" "python-requests/2.31.0"
11/May/2024:18:35:17 +0000: Connection ec7abfbd-8046-4c73-b9b7-33c131add6cd is used
11/May/2024:18:35:17 +0000: Connection ec7abfbd-8046-4c73-b9b7-33c131add6cd is available

Because:

So maybe an error in the Python code? Maybe freeport() returned the API server port somehow.

wolfgangwalther commented 1 month ago

The log says:

11/May/2024:18:35:17 +0000: Admin server listening on port 48177
11/May/2024:18:35:17 +0000: Listening on port 48177

So surely quite random. I assume freeport() is called twice and then collides (very low probability).

More interesting is the fact that PostgREST doesn't seem to fail when both ports are set to the same value?

When I try this locally, I always get postgrest: Network.Socket.bind: resource busy (Address already in use). Maybe it's related to IPv4 / IPv6 somehow? The access log is also giving ::1 as the remote IP.

wolfgangwalther commented 1 month ago

In fact... I can reproduce this:

PGRST_SERVER_PORT=3000 PGRST_SERVER_HOST="localhost" PGRST_ADMIN_SERVER_PORT=3000 postgrest-with-postgresql-16 postgrest-run

This only works when setting host to localhost explicitly. Then I get:

11/May/2024:21:30:27 +0200: Admin server listening on port 3000
11/May/2024:21:30:27 +0200: Listening on port 3000

And then each one is listening, but on a different address family (IPv4 vs. IPv6):

curl --fail http://127.0.0.1:3000/ready
curl --fail http://[::1]:3000/ready

both return different results...

But since we do http://localhost:<port> in the io tests for both, we end up on the wrong server.
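
A minimal sketch of how the fixtures could guard against that collision until the underlying behavior is addressed; freeport() is the existing helper referenced above, the rest is illustrative:

def distinct_ports(freeport):
    "Draw two ports, re-drawing the admin port until it differs from the API port."
    api_port = freeport()
    admin_port = freeport()
    while admin_port == api_port:
        admin_port = freeport()
    return api_port, admin_port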