Verifying a match loop result is potentially expensive (it runs the Ruby block containing the condition to be matched) and so I didn't want to poll on every simulation cycle while waiting for a match.
The original implementation assumed that the timeout value supplied by the user would be in the ballpark of how long the match should actually take, and so the poll interval was set to expected_time (i.e. timeout) / 10.
However, I've since realized that users who don't know what the time should be (myself included) can put in large timeout values like 1s, and then be puzzled as to why the match loop unnecessarily blocks the simulation for 100ms even when the match condition has become true straight away.
This change introduces a new default poll time of 100us; if the old calculation would have resulted in a poll time of < 100us, then that smaller value will be used instead.
Users can still specify the poll time directly if they find that the default is not fine grained enough, or is too fine grained.
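The selection logic described above can be sketched roughly as follows. This is an illustrative sketch only; the method name `poll_time`, the nanosecond units, and the keyword argument are assumptions, not the actual API:

```ruby
# Sketch of the poll-time selection described above (names hypothetical).
# All times are in nanoseconds for this example.
DEFAULT_POLL_TIME = 100_000  # 100us, the new default

def poll_time(timeout, poll: nil)
  # An explicitly supplied poll time always wins
  return poll if poll
  legacy = timeout / 10  # the original timeout / 10 rule
  # Keep the legacy value only when it polls more frequently than the default
  [legacy, DEFAULT_POLL_TIME].min
end

poll_time(1_000_000_000)           # 1s timeout  => 100_000 (100us default)
poll_time(500_000)                 # 500us timeout => 50_000 (legacy rule, finer)
poll_time(1_000_000_000, poll: 10_000)  # user override => 10_000
```

So a large 1s timeout no longer forces a 100ms poll interval, while short timeouts keep the finer-grained legacy behaviour.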