Right now, the duration of a switch press depends entirely on how long the user holds the button.
However, this has a few disadvantages:
- If a switch triggers a breakpoint, the held state is lost, which makes it hard to debug button-related behavior.
- More importantly, large circuits have a slow simulation speed (measured in simulated ns per real second). At slow simulation speeds, a switch held by the user for 500 ms may be held for only 20 ns in the circuit. That can be too short, e.g. if the circuit contains time-based debounce logic. (By the way, should we offer a bouncy switch so users can experiment with such things?)
- Some situations can be improved by simply choosing a high propagation delay. But with a high propagation delay, it can take several seconds (imagine the circuit runs at 1000 µs per second) for the switch to appear active, which makes for a strange GUI experience.
For these reasons, I propose adding a new property to switches: a minimum on-time. If it is set and the mouse is released before that duration has passed, the switch remains pressed until the time is over.
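To make the intended behavior concrete, here is a minimal sketch of the logic in Python. All names (`Switch`, `press`, `release`, `tick`, `min_on_time_ns`) are hypothetical, not the simulator's actual API; timestamps are in simulated nanoseconds:

```python
class Switch:
    """Sketch of a switch with a minimum on-time (all names are assumptions)."""

    def __init__(self, min_on_time_ns=0):
        self.min_on_time_ns = min_on_time_ns
        self.pressed = False          # state seen by the simulated circuit
        self._held_by_user = False    # whether the mouse button is still down
        self._press_start_ns = 0      # simulated time when the press began

    def press(self, now_ns):
        self.pressed = True
        self._held_by_user = True
        self._press_start_ns = now_ns

    def release(self, now_ns):
        self._held_by_user = False
        if now_ns - self._press_start_ns >= self.min_on_time_ns:
            self.pressed = False
        # Otherwise stay pressed; tick() releases once the minimum has elapsed.

    def tick(self, now_ns):
        # Called on each simulation step.
        if (self.pressed and not self._held_by_user
                and now_ns - self._press_start_ns >= self.min_on_time_ns):
            self.pressed = False
```

The key point is that a release before the minimum on-time does not clear `pressed`; the simulation clock, not the mouse, decides when the switch turns off in that case.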
PR for this is on the way.