When looking into the integration test output, I noticed monitoring-related errors and thought that we don't need monitoring when running integration tests in the first place. Looking into the monitoring subsystem, I noticed that we do not create the `AdvancedMonitor` if monitoring is not enabled, but we still initialize metrics etc. I don't think this is necessary if monitoring is not enabled, so this PR changes that.
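The core of the change can be sketched as follows. This is a minimal, hypothetical illustration (the class and field names below are stand-ins, not the project's real API): the monitor and the metrics are both created only behind the enabled check, instead of initializing metrics unconditionally.

```java
// Hedged sketch: guard both the monitor and the metrics setup behind the
// monitoring-enabled flag. All names here are illustrative stand-ins.
public class MonitoringInitSketch {

    static final class Monitoring {
        Object advancedMonitor;     // stand-in for the real AdvancedMonitor
        boolean metricsInitialized; // stand-in for meter/counter registration

        Monitoring(boolean monitoringEnabled) {
            // Previously, metrics were initialized even when monitoring was
            // disabled; now both steps happen only inside this check.
            if (monitoringEnabled) {
                this.advancedMonitor = new Object();
                this.metricsInitialized = true;
            }
        }
    }

    public static void main(String[] args) {
        Monitoring disabled = new Monitoring(false);
        Monitoring enabled = new Monitoring(true);
        System.out.println(disabled.metricsInitialized + " " + enabled.metricsInitialized);
    }
}
```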
Also, the sporadically failing tests typically time out on creating a client (in particular, on joining a client to the host). I added a new integration test that simply checks that client creation does not time out. Unfortunately, I was not able to reproduce the timeout locally. Still, I think this test might be helpful.
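The shape of that test can be sketched like this. Everything here is a hypothetical illustration (the method name and the 5-second deadline are assumptions, not the project's real API): the point is that client creation and joining run under a deadline, so a hang becomes a test failure rather than an indefinitely blocked run.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hedged sketch: bound "create client and join host" with a deadline.
// createAndJoinClient() is an illustrative stand-in for the real call.
public class ClientCreationTimeoutSketch {

    static String createAndJoinClient() {
        // Stand-in for the real client creation + host join logic.
        return "client-joined";
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> join = pool.submit(ClientCreationTimeoutSketch::createAndJoinClient);
        // Future.get with a timeout throws TimeoutException on a hang,
        // turning the sporadic stall into an explicit test failure.
        String result = join.get(5, TimeUnit.SECONDS);
        System.out.println(result);
        pool.shutdown();
    }
}
```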
Finally, I noticed that in the example test `testSendEvent` we're sending a `ResetCameraEvent`, which struck me as weirdly specific and a potential source of issues. If the goal of a test is only to verify that sending an event doesn't fail, we should use a minimal dummy event and test specific events in a dedicated fashion. This is why I changed the test to send a `DummyEvent` instead.
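The idea can be sketched in a few lines. This is a hypothetical illustration (the `Event` interface and `send` method are stand-ins, not the project's real types): a dummy event carries no payload and triggers no handler side effects, so the test exercises only the send path itself.

```java
// Hedged sketch: a payload-free dummy event isolates "does sending fail?"
// from the behavior of any specific event. All names are illustrative.
public class DummyEventSketch {

    interface Event {}

    // Minimal, side-effect-free event, unlike e.g. a camera-reset event
    // that would actually move the camera and could introduce delays.
    static final class DummyEvent implements Event {}

    static boolean send(Event e) {
        // Stand-in for the real event bus: the test only cares that
        // dispatching succeeds, not what the event does.
        return e != null;
    }

    public static void main(String[] args) {
        System.out.println(send(new DummyEvent()));
    }
}
```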
TL;DR This PR contains:

- initializing metrics only if monitoring is enabled
- adding a client creation test to ensure clients can be created and connect to hosts without experiencing a timeout
- sending `DummyEvent` instead of `ResetCameraEvent` to avoid unintended side effects and delays that might falsify the intended test result
I'm not at all convinced that these changes fix the sporadic failures, but they might help make the tests more reliable.