Open AlJohri opened 1 month ago
Looking at my logs further, I think I missed the actual error:

`[tracing-subscriber] Unable to write an event to the Writer for this Subscriber! Error: other error`

So we are looking for something in the logs, and it isn't there because the subscriber was unable to write the event after I added the `tokio_unstable` flag.
Hm, I don't really know where this error is coming from. The traceback doesn't even contain a mention of `tracing_test`, but that is to be expected, since it's a macro that is expanded inside the calling crate.
Ah, I think I now understand your second comment. Yeah, I think you're right: the `logs_contain` function seems to be working as expected, and the source of the panic is that the expected log line actually isn't there.
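For context, this is roughly the shape of test involved, a minimal sketch using tracing-test's `#[traced_test]` attribute (the test name and log message here are illustrative, not taken from the thread):

```rust
use tracing_test::traced_test;

// `#[traced_test]` installs tracing-test's subscriber for the duration of
// the test and brings the `logs_contain` / `logs_assert` helpers into scope.
#[traced_test]
#[test]
fn emits_expected_error_log() {
    tracing::error!("invalid return status code: 500 Internal Server Error");
    assert!(logs_contain("invalid return status code: 500"));
}
```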
The subscriber that tracing-test uses is just a standard `FmtSubscriber`:

So I'm not really sure what could be causing this, but it's probably coming from `tracing_subscriber`?
Thanks! So I think I see where the error is happening, but I'm not quite sure why it's happening yet. The error is related to the `mock_writer` used in the code snippet above.

Looking at the full error message (`[tracing-subscriber] Unable to write an event to the Writer for this Subscriber! Error: other error`), the last part (`other error`) gives us a clue that this is the line causing the error:

We can see in the Rust source that `io::ErrorKind::Other` maps to the string `"other error"` here.
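That mapping is easy to confirm with the standard library alone; a quick sketch:

```rust
use std::io;

fn main() {
    // An error built from just an `ErrorKind` carries no custom message,
    // so its Display impl falls back to the kind's fixed string.
    let err = io::Error::from(io::ErrorKind::Other);
    println!("{err}"); // prints "other error"
    assert_eq!(err.to_string(), "other error");
}
```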
So for whatever reason, we are erroring out while waiting on the lock.
Reading the docs for `std::sync::Mutex::lock`, it says:

> **Errors**
>
> If another user of this mutex panicked while holding the mutex, then this call will return an error once the mutex is acquired.
>
> **Panics**
>
> This function might panic when called if the lock is already held by the current thread.
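The first case from the docs, a poisoned mutex, can be reproduced with a small std-only sketch:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let lock = Arc::new(Mutex::new(0_u32));
    let poisoner = Arc::clone(&lock);

    // Panicking while holding the guard poisons the mutex.
    let _ = thread::spawn(move || {
        let _guard = poisoner.lock().unwrap();
        panic!("poison the mutex");
    })
    .join();

    // Every later `lock()` now returns Err(PoisonError) instead of a guard.
    assert!(lock.lock().is_err());
}
```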
I was able to isolate that this is not related to my test runner (cargo nextest), and I can reproduce the error when running just a single test:

```
cargo test test_one_search_returns_500
```

```
running 1 test
test search::tests::test_one_search_returns_500 ... FAILED
```
I even see that the string it is supposed to be looking for, `"invalid return status code: 500 Internal Server Error"`, is present in the logs:

```
2024-06-03T01:53:16.354309Z ERROR runtime.spawn{kind=block_on task.name= task.id=1 loc.file="amzn-eureka-broker-node/src/search.rs" loc.line=1558 loc.col=11}:background_work:shard_search_req: amzn_eureka_broker_node::search: error=invalid return status code: 500 Internal Server Error
...
2024-06-03T01:53:16.354665Z ERROR runtime.spawn{kind=block_on task.name= task.id=1 loc.file="amzn-eureka-broker-node/src/search.rs" loc.line=1558 loc.col=11}: amzn_eureka_logging_and_tracing::metrics::reporters: shard_search_req_fail reported due to:invalid return status code: 500 Internal Server Error
...
...
```

```
thread 'search::tests::test_one_search_returns_500' panicked at amzn-eureka-broker-node/src/search.rs:1556:9:
assertion failed: logs_contain("invalid return status code: 500 Internal Server Error")
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
However, when I run this single test, I no longer see the "Unable to write an event to the Writer for this Subscriber..." error 🤔
Some progress!
I found that if I remove the no-env-filter
feature, only 1 out of my 4 total tests that rely on logs_contain / logs_assert are failing. The only 1 that is failing now is an "integration test" because it lives outside of the module.
Here are the three scenarios I tested:
no-env-filter
: all tests passno-env-filter
+ tokio_unstable
: all 4 tests failtokio_unstable
: only 1 test naturally fails which is an integration testSo I believe there is some interaction between no-env-filter
+ tokio_unstable
that makes it so tracing_test's logs_contain
and logs_assert
macros can no longer see the logs even when they are being emitted and I can see them in the terminal.
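For reference, the combination under discussion roughly corresponds to configuration like the following; the crate version and file locations are illustrative assumptions, not taken from the thread:

```toml
# Cargo.toml — tracing-test with the feature in question
[dev-dependencies]
tracing-test = { version = "0.2", features = ["no-env-filter"] }
```

```toml
# .cargo/config.toml — enables the tokio_unstable cfg (needed by tokio-console)
[build]
rustflags = ["--cfg", "tokio_unstable"]
```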
I just enabled `tokio_unstable` in my application, and I see that tests which make use of `logs_contain` are now panicking. Here is the line at which my code panicked:

And here is the full backtrace from the panic:

I am trying to enable `tokio_unstable` for use with `tokio-console`. Has anyone seen this issue before?