rukai opened this issue 1 year ago
and another:
shotover 01:50:11.551156Z INFO shotover_proxy::runner: Starting Shotover 0.1.9
shotover 01:50:11.551248Z INFO shotover_proxy::runner: configuration=Config { main_log_level: "info,shotover_proxy=info", observability_interface: "0.0.0.0:9001" }
shotover 01:50:11.551276Z INFO shotover_proxy::runner: topology=Topology { sources: {"redis_prod": Redis(RedisConfig { listen_addr: "127.0.0.1:6379", connection_limit: None, hard_connection_limit: None, tls: None, timeout: None })}, chain_config: {"redis_chain": [RedisSinkCluster(RedisSinkClusterConfig { first_contact_points: ["127.0.0.1:2220", "127.0.0.1:2221", "127.0.0.1:2222", "127.0.0.1:2223", "127.0.0.1:2224", "127.0.0.1:2225"], direct_destination: Some("127.0.0.1:2220"), tls: None, connection_count: None, connect_timeout_ms: 3000 })]}, source_to_chain_mapping: {"redis_prod": "redis_chain"} }
shotover 01:50:11.556594Z INFO shotover_proxy::transforms::redis::sink_cluster: connected to upstream
shotover 01:50:11.556707Z INFO shotover_proxy::config::topology: Loaded chains ["redis_chain"]
shotover 01:50:11.556761Z INFO shotover_proxy::sources::redis: Starting Redis source on [127.0.0.1:6379]
shotover 01:50:11.556920Z INFO shotover_proxy::server: accepting inbound connections
shotover 01:50:11.557192Z INFO shotover_proxy::config::topology: Loaded sources [["redis_prod"]] and linked to chains
thread 'redis_int_tests::cluster_handling' panicked at 'called `Result::unwrap()` on an `Err` value: An error was signalled by the server: The specified node is not a master', shotover-proxy/tests/redis_int_tests/basic_driver_tests.rs:1005:10
stack backtrace:
0: rust_begin_unwind
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/std/src/panicking.rs:575:5
1: core::panicking::panic_fmt
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/panicking.rs:64:14
2: core::result::unwrap_failed
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/result.rs:1790:5
3: core::result::Result<T,E>::unwrap
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/result.rs:1112:23
4: lib::redis_int_tests::basic_driver_tests::is_cluster_replicas_ready::{{closure}}
at ./tests/redis_int_tests/basic_driver_tests.rs:1000:15
5: lib::redis_int_tests::basic_driver_tests::test_cluster_ports_rewrite_nodes::{{closure}}
at ./tests/redis_int_tests/basic_driver_tests.rs:1027:61
6: lib::redis_int_tests::cluster_handling::{{closure}}::{{closure}}
at ./tests/redis_int_tests/mod.rs:256:55
7: <core::pin::Pin<P> as core::future::future::Future>::poll
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/future/future.rs:125:9
8: tokio::runtime::park::CachedParkThread::block_on::{{closure}}
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/park.rs:283:63
9: tokio::runtime::coop::with_budget
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/coop.rs:102:5
10: tokio::runtime::coop::budget
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/coop.rs:68:5
11: tokio::runtime::park::CachedParkThread::block_on
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/park.rs:283:31
12: tokio::runtime::context::BlockingRegionGuard::block_on
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/context.rs:315:13
13: tokio::runtime::scheduler::multi_thread::MultiThread::block_on
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/scheduler/multi_thread/mod.rs:66:9
14: tokio::runtime::runtime::Runtime::block_on
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/runtime.rs:284:45
15: lib::redis_int_tests::cluster_handling::{{closure}}
at ./tests/redis_int_tests/mod.rs:258:5
16: core::ops::function::FnOnce::call_once
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
17: serial_test::serial_code_lock::local_serial_core
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/serial_test-1.0.0/src/serial_code_lock.rs:25:5
18: lib::redis_int_tests::cluster_handling
at ./tests/redis_int_tests/mod.rs:240:1
19: lib::redis_int_tests::cluster_handling::{{closure}}
at ./tests/redis_int_tests/mod.rs:241:29
20: core::ops::function::FnOnce::call_once
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
21: core::ops::function::FnOnce::call_once
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
test redis_int_tests::cluster_handling ... FAILED
and hit again:
shotover 03:15:58.426453Z INFO shotover_proxy::runner: Starting Shotover 0.1.9
shotover 03:15:58.426541Z INFO shotover_proxy::runner: configuration=Config { main_log_level: "info,shotover_proxy=info", observability_interface: "0.0.0.0:9001" }
shotover 03:15:58.426568Z INFO shotover_proxy::runner: topology=Topology { sources: {"redis_prod": Redis(RedisConfig { listen_addr: "127.0.0.1:6379", connection_limit: None, hard_connection_limit: None, tls: None, timeout: None })}, chain_config: {"redis_chain": [RedisSinkCluster(RedisSinkClusterConfig { first_contact_points: ["127.0.0.1:2220", "127.0.0.1:2221", "127.0.0.1:2222", "127.0.0.1:2223", "127.0.0.1:2224", "127.0.0.1:2225"], direct_destination: Some("127.0.0.1:2220"), tls: None, connection_count: None, connect_timeout_ms: 3000 })]}, source_to_chain_mapping: {"redis_prod": "redis_chain"} }
shotover 03:15:58.432166Z INFO shotover_proxy::transforms::redis::sink_cluster: connected to upstream
shotover 03:15:58.432287Z INFO shotover_proxy::config::topology: Loaded chains ["redis_chain"]
shotover 03:15:58.432341Z INFO shotover_proxy::sources::redis: Starting Redis source on [127.0.0.1:6379]
shotover 03:15:58.432459Z INFO shotover_proxy::config::topology: Loaded sources [["redis_prod"]] and linked to chains
shotover 03:15:58.433101Z INFO shotover_proxy::server: accepting inbound connections
thread 'redis_int_tests::cluster_handling' panicked at 'called `Result::unwrap()` on an `Err` value: An error was signalled by the server: The specified node is not a master', shotover-proxy/tests/redis_int_tests/basic_driver_tests.rs:1008:10
stack backtrace:
0: rust_begin_unwind
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/std/src/panicking.rs:575:5
1: core::panicking::panic_fmt
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/panicking.rs:64:14
2: core::result::unwrap_failed
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/result.rs:1790:5
3: core::result::Result<T,E>::unwrap
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/result.rs:1112:23
4: lib::redis_int_tests::basic_driver_tests::is_cluster_replicas_ready::{{closure}}
at ./tests/redis_int_tests/basic_driver_tests.rs:1003:15
5: lib::redis_int_tests::basic_driver_tests::test_cluster_ports_rewrite_nodes::{{closure}}
at ./tests/redis_int_tests/basic_driver_tests.rs:1030:61
6: lib::redis_int_tests::cluster_handling::{{closure}}::{{closure}}
at ./tests/redis_int_tests/mod.rs:256:55
7: <core::pin::Pin<P> as core::future::future::Future>::poll
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/future/future.rs:125:9
8: tokio::runtime::park::CachedParkThread::block_on::{{closure}}
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/park.rs:283:63
9: tokio::runtime::coop::with_budget
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/coop.rs:102:5
10: tokio::runtime::coop::budget
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/coop.rs:68:5
11: tokio::runtime::park::CachedParkThread::block_on
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/park.rs:283:31
12: tokio::runtime::context::BlockingRegionGuard::block_on
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/context.rs:315:13
13: tokio::runtime::scheduler::multi_thread::MultiThread::block_on
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/scheduler/multi_thread/mod.rs:66:9
14: tokio::runtime::runtime::Runtime::block_on
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/runtime.rs:284:45
15: lib::redis_int_tests::cluster_handling::{{closure}}
at ./tests/redis_int_tests/mod.rs:258:5
16: core::ops::function::FnOnce::call_once
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
17: serial_test::serial_code_lock::local_serial_core
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/serial_test-1.0.0/src/serial_code_lock.rs:25:5
18: lib::redis_int_tests::cluster_handling
at ./tests/redis_int_tests/mod.rs:240:1
19: lib::redis_int_tests::cluster_handling::{{closure}}
at ./tests/redis_int_tests/mod.rs:241:29
20: core::ops::function::FnOnce::call_once
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
21: core::ops::function::FnOnce::call_once
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
test redis_int_tests::cluster_handling ... FAILED
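Both failures come from `is_cluster_replicas_ready` unwrapping a transient "The specified node is not a master" error while the cluster is presumably still settling. A minimal sketch of one possible mitigation (purely illustrative, not the test's actual code): treat transient errors the same as "not ready yet" and keep polling until a deadline instead of unwrapping immediately.

```rust
use std::time::{Duration, Instant};

/// Poll `check` until it reports ready or `timeout` elapses.
/// Transient errors (e.g. "The specified node is not a master" during an
/// election) are treated as "not ready yet" rather than panicking.
fn wait_until_ready<F>(mut check: F, timeout: Duration, interval: Duration) -> Result<(), String>
where
    F: FnMut() -> Result<bool, String>,
{
    let deadline = Instant::now() + timeout;
    loop {
        match check() {
            Ok(true) => return Ok(()),
            // Before the deadline, both "not ready" and errors mean: wait and retry.
            Ok(false) | Err(_) if Instant::now() < deadline => std::thread::sleep(interval),
            Ok(false) => return Err("timed out waiting for cluster readiness".into()),
            Err(e) => return Err(format!("timed out; last error: {e}")),
        }
    }
}

fn main() {
    let mut attempts = 0;
    // Simulate the error from the logs twice, then report ready.
    let result = wait_until_ready(
        || {
            attempts += 1;
            if attempts < 3 {
                Err("The specified node is not a master".to_string())
            } else {
                Ok(true)
            }
        },
        Duration::from_secs(5),
        Duration::from_millis(1),
    );
    assert!(result.is_ok());
    assert_eq!(attempts, 3);
    println!("ready after {attempts} attempts");
}
```

The real check would of course issue the same command the test already uses; the point is only that the error path feeds the retry loop instead of `unwrap()`.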
## test redis_int_tests::multi
Should be solved by https://github.com/shotover/shotover-proxy/pull/1086
shotover 04:43:36.664124Z INFO shotover_proxy::runner: Starting Shotover 0.1.9
shotover 04:43:36.664157Z INFO shotover_proxy::runner: configuration=Config { main_log_level: "info,shotover_proxy=info", observability_interface: "0.0.0.0:9001" }
shotover 04:43:36.664164Z INFO shotover_proxy::runner: topology=Topology { sources: {"redis_prod": Redis(RedisConfig { listen_addr: "127.0.0.1:6379", connection_limit: None, hard_connection_limit: None, tls: None, timeout: None })}, chain_config: {"redis_chain": [TuneableConsistencyScatter(TuneableConsistencyScatterConfig { route_map: {"two": [RedisTimestampTagger, RedisSinkSingle(RedisSinkSingleConfig { address: "127.0.0.1:3332", tls: None, connect_timeout_ms: 3000 })], "three": [RedisTimestampTagger, RedisSinkSingle(RedisSinkSingleConfig { address: "127.0.0.1:3333", tls: None, connect_timeout_ms: 3000 })], "one": [RedisTimestampTagger, RedisSinkSingle(RedisSinkSingleConfig { address: "127.0.0.1:3331", tls: None, connect_timeout_ms: 3000 })]}, write_consistency: 2, read_consistency: 2 })]}, source_to_chain_mapping: {"redis_prod": "redis_chain"} }
shotover 04:43:36.664206Z WARN shotover_proxy::transforms::distributed::tuneable_consistency_scatter: Using this transform is considered unstable - Does not work with REDIS pipelines
shotover 04:43:36.664249Z INFO shotover_proxy::config::topology: Loaded chains ["redis_chain"]
shotover 04:43:36.664260Z INFO shotover_proxy::sources::redis: Starting Redis source on [127.0.0.1:6379]
shotover 04:43:36.664830Z INFO shotover_proxy::config::topology: Loaded sources [["redis_prod"]] and linked to chains
shotover 04:43:36.665030Z INFO shotover_proxy::server: accepting inbound connections
shotover 04:43:36.666437Z INFO connection{id=1 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain two was shutdown
shotover 04:43:36.666457Z INFO connection{id=3 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain two was shutdown
shotover 04:43:36.666700Z INFO connection{id=1 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain one was shutdown
shotover 04:43:36.666903Z INFO connection{id=1 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain three was shutdown
shotover 04:43:36.666919Z INFO connection{id=3 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain three was shutdown
shotover 04:43:36.666518Z INFO connection{id=3 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain one was shutdown
test redis_int_tests::multi has been running for over 60 seconds
thread 'redis_int_tests::multi' panicked at 'assertion failed: `(left == right)`
left: `Err(WRONGTYPE: Operation against a key holding the wrong kind of value)`,
right: `Ok("OK")`', shotover-proxy/tests/redis_int_tests/assert.rs:12:5
stack backtrace:
0: rust_begin_unwind
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/std/src/panicking.rs:575:5
1: core::panicking::panic_fmt
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/panicking.rs:64:14
2: core::panicking::assert_failed_inner
3: core::panicking::assert_failed
4: lib::redis_int_tests::assert::assert_ok::{{closure}}
5: lib::redis_int_tests::basic_driver_tests::test_tuple_args::{{closure}}
6: lib::redis_int_tests::multi::{{closure}}::{{closure}}
7: tokio::runtime::runtime::Runtime::block_on
8: core::ops::function::FnOnce::call_once
9: serial_test::serial_code_lock::local_serial_core
10: core::ops::function::FnOnce::call_once
11: core::ops::function::FnOnce::call_once
at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
test redis_int_tests::multi ... FAILED
shotover 00:11:38.791417Z INFO shotover::runner: Starting Shotover 0.1.10
shotover 00:11:38.791519Z INFO shotover::runner: configuration=Config { main_log_level: "info,shotover_proxy=info", observability_interface: "0.0.0.0:9001" }
shotover 00:11:38.791551Z INFO shotover::runner: topology=Topology { sources: {"kafka_source": Kafka(KafkaConfig { listen_addr: "127.0.0.1:9192", connection_limit: None, hard_connection_limit: None, tls: None, timeout: None })}, chain_config: {"main_chain": TransformChainConfig([DebugForceEncodeConfig { encode_requests: true, encode_responses: true }, KafkaSinkSingleConfig { address: "127.0.0.1:9092", connect_timeout_ms: 3000, read_timeout: None }])}, source_to_chain_mapping: {"kafka_source": "main_chain"} }
shotover 00:11:38.791726Z INFO shotover::config::topology: Loaded chains ["main_chain"]
shotover 00:11:38.791854Z INFO shotover::sources::kafka: Starting Kafka source on [127.0.0.1:9192]
shotover 00:11:38.791972Z INFO shotover::config::topology: Loaded sources [["kafka_source"]] and linked to chains
shotover 00:11:38.792124Z INFO shotover::server: accepting inbound connections
2023-04-26T00:11:38.794440Z ERROR librdkafka: librdkafka: FAIL [thrd:localhost:9192/bootstrap]: localhost:9192/bootstrap: Connect to ipv6#[::1]:9192 failed: Connection refused (after 0ms in state CONNECT)
2023-04-26T00:11:38.795066Z ERROR rdkafka::client: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): localhost:9192/bootstrap: Connect to ipv6#[::1]:9192 failed: Connection refused (after 0ms in state CONNECT)
2023-04-26T00:11:38.795597Z ERROR rdkafka::client: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
shotover 00:11:45.140390Z INFO shotover::runner: received SIGTERM
shotover 00:11:45.566429Z INFO shotover::runner: Shotover was shutdown cleanly.
shotover 00:11:45.566337Z ERROR connection{id=5 source="KafkaSource"}: shotover::server: failed to send or encode message: Broken pipe (os error 32)
thread 'kafka_int_tests::passthrough_encode' panicked at 'Unexpected event 00:11:45.566337Z ERROR connection{id=5 source="KafkaSource"}: shotover::server: failed to send or encode message: Broken pipe (os error 32)
Any ERROR or WARN events that occur in integration tests must be explicitly allowed by adding an appropriate EventMatcher to the method call.', /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-bin-process-0.1.0/src/process.rs:248:21
stack backtrace:
0: rust_begin_unwind
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/std/src/panicking.rs:579:5
1: core::panicking::panic_fmt
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/panicking.rs:64:14
2: tokio_bin_process::process::BinProcess::consume_remaining_events_inner::{{closure}}
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-bin-process-0.1.0/src/process.rs:248:21
3: tokio_bin_process::process::BinProcess::consume_remaining_events::{{closure}}
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-bin-process-0.1.0/src/process.rs:198:13
4: tokio_bin_process::process::BinProcess::shutdown_and_then_consume_events::{{closure}}
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-bin-process-0.1.0/src/process.rs:186:13
5: lib::kafka_int_tests::passthrough_encode::{{closure}}::{{closure}}
at ./tests/kafka_int_tests/mod.rs:36:51
6: <core::pin::Pin<P> as core::future::future::Future>::poll
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/future/future.rs:125:9
7: <core::pin::Pin<P> as core::future::future::Future>::poll
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/future/future.rs:125:9
8: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}::{{closure}}
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:541:57
9: tokio::runtime::coop::with_budget
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/coop.rs:107:5
10: tokio::runtime::coop::budget
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/coop.rs:73:5
11: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:541:25
12: tokio::runtime::scheduler::current_thread::Context::enter
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:350:19
13: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:540:36
14: tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}}
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:615:57
15: tokio::macros::scoped_tls::ScopedKey<T>::set
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/macros/scoped_tls.rs:61:9
16: tokio::runtime::scheduler::current_thread::CoreGuard::enter
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:615:27
17: tokio::runtime::scheduler::current_thread::CoreGuard::block_on
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:530:19
18: tokio::runtime::scheduler::current_thread::CurrentThread::block_on
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:154:24
19: tokio::runtime::runtime::Runtime::block_on
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/runtime.rs:302:47
20: lib::kafka_int_tests::passthrough_encode::{{closure}}
at ./tests/kafka_int_tests/mod.rs:36:5
21: core::ops::function::FnOnce::call_once
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/ops/function.rs:250:5
22: serial_test::serial_code_lock::local_serial_core
at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/serial_test-2.0.0/src/serial_code_lock.rs:25:5
23: lib::kafka_int_tests::passthrough_encode
at ./tests/kafka_int_tests/mod.rs:24:1
24: lib::kafka_int_tests::passthrough_encode::{{closure}}
at ./tests/kafka_int_tests/mod.rs:25:31
25: core::ops::function::FnOnce::call_once
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/ops/function.rs:250:5
26: core::ops::function::FnOnce::call_once
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
test kafka_int_tests::passthrough_encode ... FAILED
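tokio-bin-process fails the test on any WARN/ERROR event that was not explicitly allowed, so the fix on the test side is to pass an `EventMatcher` permitting this shutdown-race `Broken pipe` error. As a rough self-contained sketch of the allow-list idea only (field names here are illustrative, not the crate's actual API):

```rust
#[derive(Debug, PartialEq)]
enum Level {
    Warn,
    Error,
}

struct Event {
    level: Level,
    target: String,
    message: String,
}

/// Stand-in for the EventMatcher concept: every criterion that is set must
/// match the event; unset criteria match anything.
struct Matcher {
    level: Option<Level>,
    target: Option<String>,
    message_contains: Option<String>,
}

impl Matcher {
    fn matches(&self, e: &Event) -> bool {
        self.level.as_ref().map_or(true, |l| *l == e.level)
            && self.target.as_ref().map_or(true, |t| t == &e.target)
            && self
                .message_contains
                .as_ref()
                .map_or(true, |m| e.message.contains(m))
    }
}

fn main() {
    // Allow exactly the shutdown-race error seen in the logs above.
    let allow_broken_pipe = Matcher {
        level: Some(Level::Error),
        target: Some("shotover::server".to_string()),
        message_contains: Some("Broken pipe".to_string()),
    };
    let event = Event {
        level: Level::Error,
        target: "shotover::server".to_string(),
        message: "failed to send or encode message: Broken pipe (os error 32)".to_string(),
    };
    assert!(allow_broken_pipe.matches(&event));
    println!("event allowed");
}
```

Keeping the matcher narrow (level + target + message substring) avoids accidentally masking unrelated errors during shutdown.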
thread 'redis_int_tests::multi' panicked at 'assertion failed: `(left == right)`
left: `Ok(0)`,
right: `Ok(1)`', shotover-proxy/tests/redis_int_tests/assert.rs:16:5
stack backtrace:
0: rust_begin_unwind
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/std/src/panicking.rs:579:5
1: core::panicking::panic_fmt
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/panicking.rs:64:14
2: core::panicking::assert_failed_inner
3: core::panicking::assert_failed
4: lib::redis_int_tests::assert::assert_int::{{closure}}
5: lib::redis_int_tests::basic_driver_tests::test_scan::{{closure}}
6: lib::redis_int_tests::multi::{{closure}}::{{closure}}
7: tokio::runtime::runtime::Runtime::block_on
8: core::ops::function::FnOnce::call_once
9: serial_test::serial_code_lock::local_serial_core
10: core::ops::function::FnOnce::call_once
11: core::ops::function::FnOnce::call_once
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/ops/function.rs:250:5
It seems a flush didn't work, complete, or occur at all, leaving stale values undeleted.
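If that is the cause, one illustrative fix is to verify the keyspace is observably empty before the next test step rather than assuming the flush completed synchronously. A hedged sketch (`flush` and `dbsize` are placeholders standing in for real driver calls, not the test's actual helpers):

```rust
use std::time::Duration;

/// Hypothetical sketch: issue FLUSHALL, then poll DBSIZE until the keyspace
/// is actually empty, up to `max_polls` attempts.
fn flush_and_verify<F, D>(mut flush: F, mut dbsize: D, max_polls: u32) -> bool
where
    F: FnMut(),
    D: FnMut() -> u64,
{
    flush();
    for _ in 0..max_polls {
        if dbsize() == 0 {
            return true; // keyspace confirmed empty
        }
        std::thread::sleep(Duration::from_millis(10));
    }
    false // flush never became visible within the polling budget
}

fn main() {
    // Simulate a store that drains over a few polls after the flush is issued.
    let mut keys: u64 = 3;
    let flushed = flush_and_verify(
        || {}, // flush issued (a no-op in this simulation)
        || {
            keys = keys.saturating_sub(1);
            keys
        },
        10,
    );
    assert!(flushed);
    println!("flush verified");
}
```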
shotover-proxy::lib kafka_int_tests::cluster_1_rack_single_shotover::case_2_java
shotover 02:43:00.064746Z INFO shotover::runner: Starting Shotover 0.3.1
shotover 02:43:00.064770Z INFO shotover::runner: configuration=Config { main_log_level: "info, shotover::connection_span=debug", observability_interface: Some("0.0.0.0:9001") }
shotover 02:43:00.064787Z INFO shotover::runner: topology=Topology { sources: [Kafka(KafkaConfig { name: "kafka", listen_addr: "127.0.0.1:9192", connection_limit: None, hard_connection_limit: None, tls: None, timeout: None, chain: TransformChainConfig([KafkaSinkClusterConfig { first_contact_points: ["172.16.1.2:9092"], shotover_nodes: [ShotoverNodeConfig { address: "127.0.0.1:9192", rack: "rack0", broker_id: 0 }], local_shotover_broker_id: 0, connect_timeout_ms: 3000, read_timeout: None, tls: None, authorize_scram_over_mtls: None }]) })] }
shotover 02:43:00.064816Z INFO shotover::sources::kafka: Starting Kafka source on [127.0.0.1:9192]
shotover 02:43:00.064945Z INFO shotover::config::topology: Shotover is now accepting inbound connections
2024-06-17T02:43:00.065661Z INFO j4rs::logger: Setting classpath to -Djava.class.path=/home/runner/work/shotover-proxy/shotover-proxy/target/release/jassets/j4rs-0.18.0-jar-with-dependencies.jar
2024-06-17T02:43:00.066182Z INFO j4rs::logger: Setting library path to -Djava.library.path=/home/runner/work/shotover-proxy/shotover-proxy/target/release/deps:/usr/lib:/lib
2024-06-17T02:43:00.067112Z INFO j4rs::logger: Passing to the Java world the name of the library to load: j4rs-4c4b93671bac0689
2024-06-17T02:43:00.101906Z INFO j4rs::logger: No JVMs exist. Creating a new one...
test kafka_int_tests::cluster_1_rack_single_shotover::case_2_java ... FAILED
failures:
failures:
kafka_int_tests::cluster_1_rack_single_shotover::case_2_java
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 100 filtered out; finished in 22.94s
--- STDERR: shotover-proxy::lib kafka_int_tests::cluster_1_rack_single_shotover::case_2_java ---
[main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:
auto.include.jmx.reporter = true
bootstrap.controllers = []
bootstrap.servers = [127.0.0.1:9192]
client.dns.lookup = use_all_dns_ips
client.id =
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
enable.metrics.push = true
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.max.ms = 1000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 2ae524ed625438c5
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1718592183108
thread 'kafka_int_tests::cluster_1_rack_single_shotover::case_2_java' panicked at /home/runner/work/shotover-proxy/shotover-proxy/test-helpers/src/connection/kafka/java.rs:475:58:
called `Result::unwrap()` on an `Err` value: JavaError("org.apache.kafka.common.errors.UnknownTopicOrPartitionException: \n")
stack backtrace:
0: rust_begin_unwind
1: core::panicking::panic_fmt
2: core::result::unwrap_failed
3: lib::kafka_int_tests::test_cases::admin_setup::{{closure}}
4: lib::kafka_int_tests::test_cases::standard_test_suite::{{closure}}
5: lib::kafka_int_tests::cluster_1_rack_single_shotover::{{closure}}
6: lib::kafka_int_tests::cluster_1_rack_single_shotover::case_2_java::{{closure}}
7: tokio::runtime::runtime::Runtime::block_on
8: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node 0 disconnected.
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected.
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node 0 disconnected.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node 0 (/127.0.0.1:9192) could not be established. Node may not be available.
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node 0 disconnected.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node 0 (/127.0.0.1:9192) could not be established. Node may not be available.
Intermittent test failures are now occurring fairly commonly; time to do some investigation and flush them out.
## cassandra_int_tests::cluster_single_rack_v4::case_2_datastax
Oh oops, I wasn't thinking and overwrote this with a manually induced test failure.
## redis_int_tests::multi
This should be fixed by: https://github.com/shotover/shotover-proxy/pull/1080