Open knizhnik opened 1 month ago
functions: 31.8% (7892 of 24837 functions)
lines: 49.5% (62456 of 126292 lines)
* collected from Rust tests only
See: neondatabase/postgres#501
I don't think we should be exposing this GUC in the global namespace, but rather from the extension's namespace (so we can only enable this behavior when the extension is loaded)
Sorry, I do not understand. We are using this GUC in Postgres core code. Certainly we can register the variable in Postgres core but define the GUC that changes it in the neon extension, but this is IMHO a strange solution. We could also add yet another hook (what to do on misconfig) and define it in the Neon extension.
But I think that such a "less strict" policy for checking primary/replica configuration will be useful not only to Neon, so maybe at some point it will be possible to propose this patch to the community.
As for !recoveryPauseOnMisconfig, this can cause us to see more active transactions on a standby than that standby expects. What's the behaviour when we see >max_connections concurrently active transactions on a hot standby?
I have added a test for this case. Transactions are applied normally. It is the expected behaviour: LR serialises transactions and only one transaction is applied at any moment in time.
In some other cases of misconfiguration (for example, if max_prepared_transactions at the replica is smaller than at the primary), the replica will crash with a fatal error:
PG:2024-09-23 06:29:50.661 GMT [38233] FATAL: maximum number of prepared transactions reached
PG:2024-09-23 06:29:50.661 GMT [38233] HINT: Increase max_prepared_transactions (currently 5).
PG:2024-09-23 06:29:50.661 GMT [38233] CONTEXT: WAL redo at 0/154CAA8 for Transaction/PREPARE: gid t5: 2024-09-23 06:29:50.661593+00
PG:2024-09-23 06:29:50.662 GMT [38230] LOG: startup process (PID 38233) exited with exit code 1
PG:2024-09-23 06:29:50.662 GMT [38230] LOG: terminating any other active server processes
PG:2024-09-23 06:29:50.663 GMT [38230] LOG: shutting down due to startup process failure
PG:2024-09-23 06:29:50.663 GMT [38230] LOG: database system is shut down
after which the control plane should restart the replica with synced config parameters, so the next recovery attempt should succeed.
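For illustration only (not one of the PR's tests): a minimal sketch, in the style of the tests quoted later in this thread, of how this max_prepared_transactions mismatch can be triggered. The test name and GUC values here are hypothetical; the fixture API (neon_simple_env, create_start, new_replica_start, config_lines) is taken from the existing tests.
import time

from fixtures.neon_fixtures import NeonEnv


def test_prepared_xacts_mismatch(neon_simple_env: NeonEnv):  # hypothetical test, not in the PR
    env = neon_simple_env
    with env.endpoints.create_start(
        branch_name="main",
        endpoint_id="primary",
        config_lines=["max_prepared_transactions=10"],  # assumed value, larger than the replica's
    ) as primary:
        with primary.connect() as p_con:
            with p_con.cursor() as p_cur:
                p_cur.execute("CREATE TABLE t(pk int)")
        with env.endpoints.new_replica_start(
            origin=primary,
            endpoint_id="secondary",
            config_lines=["max_prepared_transactions=5"],  # smaller than on the primary
        ) as secondary:
            # Prepare six transactions on the primary: one more than the replica
            # can track, so WAL redo on the replica should fail with
            # "maximum number of prepared transactions reached" (as in the log above).
            for i in range(6):
                con = primary.connect()
                cur = con.cursor()
                cur.execute("begin")
                cur.execute("insert into t (pk) values (%s)", (i,))
                cur.execute(f"prepare transaction 't{i}'")
            time.sleep(5)
            # At this point the replica's startup process is expected to have
            # exited as shown in the log output above; `secondary` is only kept
            # open so the replica exists while the WAL is replayed.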
What's the behaviour when we see >max_connections concurrently active transactions on a hot standby?
I have added a test for this case. Transactions are applied normally. It is the expected behaviour: LR serialises transactions and only one transaction is applied at any moment in time.
I don't care about LR, I care about normal replication, which does not serialize transactions. And in that case, we're probably writing visibility information into data structs sized to MaxBackends, while we're writing > MaxBackends values into those, which is probably very unsafe.
Did you check that visibility information is correctly applied even at large concurrency?
Note that a replica's transaction state tracking is sized to 64 * (MaxBackends + max_prepared_xacts) entries, so disarming this check is bound to create errors elsewhere during replay of transaction state.
E.g. spin up a replica with max_connections=5, a primary with max_connections=1000, and start a transaction in all those max_connections of the primary. If my math is correct, the WAL redo process on the replica will throw an error before even half of the connections' transaction IDs have been received, because it ran out of space to put those transaction IDs. I had to check that we didn't silently write into shared structs, and am happy we don't, but it's really not good to remove protections and assume everything still works just fine because you did some light testing; usually checks are in place to protect us against exactly those extreme cases.
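For context, here is my own back-of-the-envelope sketch of that capacity (an approximation, not something stated in the PR). It assumes the standby tracks assigned xids in the KnownAssignedXids array, sized from MaxBackends and max_prepared_xacts, and that MaxBackends also counts autovacuum workers, background workers and WAL senders rather than max_connections alone; the exact constants below are assumptions.
# Rough estimate of the standby's KnownAssignedXids capacity.
# Assumption: about (64 + 1) entries per slot (64 cached subxids plus the
# top-level xid), and MaxBackends = max_connections + autovacuum_max_workers
# + 1 (launcher) + max_worker_processes + max_wal_senders.
def known_assigned_xids_capacity(
    max_connections: int,
    autovacuum_max_workers: int = 3,  # stock PostgreSQL default
    max_worker_processes: int = 8,    # stock PostgreSQL default
    max_wal_senders: int = 10,        # stock PostgreSQL default
    max_prepared_xacts: int = 0,
) -> int:
    max_backends = (
        max_connections
        + autovacuum_max_workers
        + 1
        + max_worker_processes
        + max_wal_senders
    )
    return (64 + 1) * (max_backends + max_prepared_xacts)


# Replica from the example above (max_connections=5, everything else default):
print(known_assigned_xids_capacity(5))  # 1755 entries
So with stock defaults for the other GUCs the array is noticeably larger than 64 * max_connections, which matters for the test results discussed below.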
Sorry, too many different tickets are mixed up in my head :( Certainly I was not thinking about physical replication, and my reply that LR serialises transactions is completely irrelevant here.
But I failed to reproduce the recovery failure with max_connections at the primary equal to 100 and at the replica just 5. I ran 90 parallel transactions and they were replicated normally:
import time

from fixtures.neon_fixtures import NeonEnv


def test_physical_replication_config_mismatch(neon_simple_env: NeonEnv):
    env = neon_simple_env
    with env.endpoints.create_start(
        branch_name="main",
        endpoint_id="primary",
    ) as primary:
        with primary.connect() as p_con:
            with p_con.cursor() as p_cur:
                p_cur.execute(
                    "CREATE TABLE t(pk bigint primary key, payload text default repeat('?',200))"
                )
        time.sleep(1)
        with env.endpoints.new_replica_start(
            origin=primary,
            endpoint_id="secondary",
            config_lines=["max_connections=5"],
        ) as secondary:
            with secondary.connect() as s_con:
                with s_con.cursor() as s_cur:
                    # Open 90 concurrent write transactions on the primary while
                    # the replica only allows max_connections=5, then commit them all.
                    cursors = []
                    for i in range(90):
                        p_con = primary.connect()
                        p_cur = p_con.cursor()
                        p_cur.execute("begin")
                        p_cur.execute("insert into t (pk) values (%s)", (i,))
                        cursors.append(p_cur)
                    for p_cur in cursors:
                        p_cur.execute("commit")
                    time.sleep(5)
                    s_cur.execute("select count(*) from t")
                    assert s_cur.fetchall()[0][0] == 90
                    s_cur.execute("show max_connections")
                    assert s_cur.fetchall()[0][0] == '5'
Any idea why it works?
But I failed to reproduce the recovery failure with max_connections at the primary equal to 100 and at the replica just 5. I ran 90 parallel transactions and they were replicated normally:
I said 1000, not 100.
The issue occurs at n_entries >= 64 * max_connections, so you'll have to consume 320 (multi)xids for that array to fill up at max_connections=5. I'd test with 1, but I'm not sure we can start with max_connections=1. If we can, you can use that as the replica node setting instead.
Sorry, can you explain the source of this formula: n_entries >= 64 * max_connections?
I changed the test to 400 connections at the primary and 4 at the replica and it still passed:
def test_physical_replication_config_mismatch(neon_simple_env: NeonEnv):
    env = neon_simple_env
    with env.endpoints.create_start(
        branch_name="main",
        endpoint_id="primary",
        config_lines=["max_connections=500"],
    ) as primary:
        with primary.connect() as p_con:
            with p_con.cursor() as p_cur:
                p_cur.execute(
                    "CREATE TABLE t(pk bigint primary key, payload text default repeat('?',200))"
                )
        time.sleep(1)
        with env.endpoints.new_replica_start(
            origin=primary,
            endpoint_id="secondary",
            config_lines=["max_connections=4"],
        ) as secondary:
            with secondary.connect() as s_con:
                with s_con.cursor() as s_cur:
                    cursors = []
                    for i in range(400):
                        p_con = primary.connect()
                        p_cur = p_con.cursor()
                        p_cur.execute("begin")
                        p_cur.execute("insert into t (pk) values (%s)", (i,))
                        cursors.append(p_cur)
                    time.sleep(5)
                    for p_cur in cursors:
                        p_cur.execute("commit")
                    time.sleep(2)
                    s_cur.execute("select count(*) from t")
                    assert s_cur.fetchall()[0][0] == 400
With 900 connections at the primary the test also passed.
OK, I've found a case where we hit the elog(ERROR) in KnownAssignedXidsAdd on the secondary.
Configuration:
Primary:
max_connections=1000
Secondary:
max_connections=2
autovacuum_max_workers=1
max_worker_processes=5
max_wal_senders=1
superuser_reserved_connections=0
Execute 650+ concurrently on the primary, e.g. with pgbench -c 990 -f script.sql:
BEGIN;
INSERT INTO test SELECT 1;
SELECT pg_sleep(10);
COMMIT;
You can adjust the secondary's max_connections upward if you increase the number of subxacts consumed by the benchmark transaction before the pg_sleep() operation.
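Plugging this configuration into the same back-of-the-envelope sizing sketched earlier in the thread (my approximation, not a number from the PR) lines up with the 650 threshold:
# Secondary from the reproduction above:
#   max_connections=2, autovacuum_max_workers=1, max_worker_processes=5,
#   max_wal_senders=1, max_prepared_transactions left at 0
max_backends = 2 + 1 + 1 + 5 + 1          # = 10 (the extra +1 is the autovacuum launcher)
capacity = (64 + 1) * (max_backends + 0)  # = 650 trackable KnownAssignedXids entries
print(capacity)  # 650
Roughly 650 concurrently assigned xids fit before KnownAssignedXidsAdd errors out, which is consistent with "Execute 650+ concurrently" above, and also suggests why the earlier tests with default worker/WAL-sender settings on the replica never overflowed.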
@knizhnik can you verify my findings?
I have created a test based on your scenario and reproduced FATAL: too many KnownAssignedXids.
But it is actually expected behaviour!
Yes, we know that smaller values of some critical parameters can cause recovery failure.
The probability of it is expected to be very small, but it can happen and cause a replica restart.
Since the replica is always restarted from the most recent LSN, it skips the transactions which caused this failure and most likely will be able to proceed.
So I do not treat this error as a reason for rejecting this approach, do you?
Since the replica is always restarted from the most recent LSN, it skips the transactions which caused this failure and most likely will be able to proceed.
I don't see it that way. If a user has a workload that causes this crash, then it's likely they will hit this again. And I don't like the idea of a primary that can consistently cause a secondary to crash.
Well, somebody needs to make a decision. I see very good reasons to support different configurations of primary and replica (different workloads, OLTP vs. OLAP, ...).
The probability of this kind of problem is very, very low. In your case we had to specify max_connections=1000 for the primary and just 2 for the replica; in real life nobody will ever set up such a configuration. Moreover, we do not allow users to alter GUCs which are critical for replication (like max_connections, max_prepared_transactions, ...). Values of some of these GUCs are now fixed and some of them depend on the number of CUs, and the possible range of values, for example for max_connections, excludes such situations and crashes of the replica.
Also, as far as I understand, @hlinnaka is going to use his patch with CSN on the replica, which will completely eliminate this problem with known XIDs. Yes, that may not happen soon (still, I hope we will do it before the patch is committed to vanilla).
And one last point: if some customer manages to spawn 1000 active transactions, then most likely they will face many other problems (OOM, local disk space exhaustion, ...) much more critical than problems with replication.
Problem
See https://github.com/neondatabase/neon/issues/9023
Summary of changes
Add GUC recovery_pause_on_misconfig allowing recovery not to pause in case of a replica/primary configuration mismatch.
See https://github.com/neondatabase/postgres/pull/501
See https://github.com/neondatabase/postgres/pull/502
See https://github.com/neondatabase/postgres/pull/503
See https://github.com/neondatabase/postgres/pull/504
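A hypothetical usage sketch (the test name is made up, and I am assuming the GUC is registered globally under exactly this name with 'off' meaning "do not pause", which the PR text does not spell out): a replica started with a deliberately smaller max_connections and the new GUC disabled, in the style of the tests above.
def test_replica_with_relaxed_misconfig_check(neon_simple_env: NeonEnv):  # hypothetical
    env = neon_simple_env
    with env.endpoints.create_start(
        branch_name="main",
        endpoint_id="primary",
    ) as primary:
        with env.endpoints.new_replica_start(
            origin=primary,
            endpoint_id="secondary",
            config_lines=[
                "max_connections=5",                # smaller than on the primary
                "recovery_pause_on_misconfig=off",  # keep replaying WAL despite the mismatch
            ],
        ) as secondary:
            with secondary.connect() as s_con:
                with s_con.cursor() as s_cur:
                    s_cur.execute("show recovery_pause_on_misconfig")
                    assert s_cur.fetchall()[0][0] == "off"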
Checklist before requesting a review
Checklist before merging