While working on writing tests and fixtures for this project, I stumbled upon `test_extract_hdlc_data_with_random_prefix` here: https://github.com/scs/smartmeter-datacollector/blob/d4613f504b7b0e1bb63af14c67f200402a6db744/tests/test_hdlc_dlms_parser.py#L41-L50
However, when playing around with other seeds and data lengths, I came across the following prefix that makes the current test fail, for example:

`prefix = b'~\xa3'`

(presumably because `~` is `0x7E`, the HDLC flag byte, so the random prefix looks like the start of a frame to the parser)
Is the parser actually supposed to be robust against random data already being present in its buffer? If yes, I'm not 100% sure how to properly test that, since random data could also be valid, or at least look valid enough to cause a crash such as this one. If it isn't, it might be better to use a fixed static byte string there that is known never to cause issues (which is effectively the situation right now, since the "random" data is pinned in place by a seed value), rather than making it seem like the tests actually exercise randomness. Or maybe this test is not necessary at all.
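If the test is kept with a random prefix, one middle ground might be to draw random bytes while excluding the HDLC flag byte (`0x7E`, i.e. `~`), so the prefix can never be mistaken for a frame boundary. A minimal sketch of such a hypothetical helper (`random_prefix` is my name, not part of the project):

```python
import random

# HDLC frame delimiter; a random prefix containing it can look like
# the start of a frame and confuse the parser.
HDLC_FLAG = 0x7E

def random_prefix(length: int, seed: int = 1) -> bytes:
    """Hypothetical helper: deterministic 'random' bytes that exclude
    the HDLC flag byte, so the prefix stays unambiguous garbage."""
    rng = random.Random(seed)
    allowed = [b for b in range(256) if b != HDLC_FLAG]
    return bytes(rng.choice(allowed) for _ in range(length))

prefix = random_prefix(8, seed=42)
assert HDLC_FLAG not in prefix
```

This keeps the variability across seeds and lengths while guaranteeing the prefix can never contain a fake frame start, which might be a better fit than a fully static string if the goal is to exercise "garbage before the frame" handling.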