madsmtm opened 12 months ago

Is there a reason that the `embedded_hal::delay::DelayUs` API is not defined as follows? I haven't actually implemented this API myself, but it seems like this would be both much cleaner and much more flexible (e.g. allowing nanosecond resolution as well). Is the issue that `core::time::Duration` is too large to pass around?
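Judging from the rest of the thread, the proposal is a single method taking `core::time::Duration`; a minimal sketch of that shape (an illustration, not the issue's original code):

```rust
use core::time::Duration;

pub trait Delay {
    /// Pause execution for at least the given duration.
    fn delay(&mut self, duration: Duration);
}
```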
Another oddity: `DelayUs` from `embedded-hal` has a default implementation for `delay_ms`, while the same trait from `embedded-hal-async` does not have this default implementation.
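For context, the sync trait's provided method follows the usual Rust pattern of one required fine-grained method plus a default built on top of it; a sketch (one plausible default body, not necessarily the crate's exact code):

```rust
pub trait DelayUs {
    /// The only required method.
    fn delay_us(&mut self, us: u32);

    /// Provided method: implementers get it for free, but can override
    /// it with a more efficient hardware-backed version.
    fn delay_ms(&mut self, ms: u32) {
        for _ in 0..ms {
            self.delay_us(1000);
        }
    }
}
```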
> Is the issue that `core::time::Duration` is too large to pass around?
iirc this is because `core::time::Duration` is 64-bit and we didn't want to impose that on (what are predominantly) 32-bit targets
Fair enough, though perhaps it would still make sense to design the API as I've outlined above? That way, people on 32-bit targets could still use the usual `delay_ms`/`delay_us` methods if they're concerned about 64-bit numbers (implementers could provide efficient versions of each of these), while most people could use the usual `delay` method (which after inlining would likely compile down to the same thing, if we assume the common case of a constant duration).
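A sketch of that layering, assuming the hypothetical `Duration`-based `delay` from the top of the thread:

```rust
use core::time::Duration;

pub trait Delay {
    /// The one required method.
    fn delay(&mut self, duration: Duration);

    /// Convenience methods with default implementations; implementers on
    /// 32-bit targets could override these to avoid 64-bit arithmetic.
    fn delay_ms(&mut self, ms: u32) {
        self.delay(Duration::from_millis(ms as u64));
    }

    fn delay_us(&mut self, us: u32) {
        self.delay(Duration::from_micros(us as u64));
    }
}
```

With a constant argument such as `Duration::from_millis(5)`, the constructor can be folded at compile time, which is the inlining point being made here.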
> Another oddity: `DelayUs` from `embedded-hal` has a default implementation for `delay_ms`, while the same trait from `embedded-hal-async` does not have this default implementation.
Thanks for pointing this out! This was due to a limitation of async functions in traits at the time it was added, but it's now possible, so that has been fixed (see #535).
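In other words, with async-fn-in-trait now supporting default bodies (stable since Rust 1.75), the async trait can mirror the sync one; a sketch, not the crate's exact code:

```rust
pub trait DelayUs {
    async fn delay_us(&mut self, us: u32);

    // Async trait methods can now have default bodies too.
    async fn delay_ms(&mut self, ms: u32) {
        for _ in 0..ms {
            self.delay_us(1000).await;
        }
    }
}
```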
> Fair enough, though perhaps it would still make sense to design the API as I've outlined above?
The problem is that the end users (who've picked a target) aren't the ones calling `delay`: it's driver authors, writing platform-agnostic code, who call it on some generic `Delay` provider. The end user can't choose which delay method a driver calls, so they would likely end up with delays based on core's `Duration`, which is 12 bytes in the end (u64 seconds + u32 nanoseconds), making the computation and memory overhead potentially significant.
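To illustrate the point: a platform-agnostic driver is generic over its delay provider, so the delay calls are baked in by the driver author (hypothetical driver, using the current `embedded_hal::delay::DelayNs` trait; all names illustrative):

```rust
use embedded_hal::delay::DelayNs;

pub struct Sensor<D> {
    delay: D,
}

impl<D: DelayNs> Sensor<D> {
    pub fn reset(&mut self) {
        // The driver author picked this call; the end user only supplies
        // a concrete `D` and doesn't get to choose which method runs.
        self.delay.delay_us(1500);
    }
}
```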
We have, however, just changed the trait (in #537) to permit nanosecond resolution, although many platforms will likely round up significantly (e.g. probably to the next-highest µs).
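For example, an implementation backed by a timer with only microsecond resolution might satisfy `delay_ns` by rounding up (hypothetical implementation):

```rust
use embedded_hal::delay::DelayNs;

/// Hypothetical timer whose hardware only counts whole microseconds.
struct MicrosTimer;

impl MicrosTimer {
    fn busy_wait_us(&mut self, _us: u32) {
        // platform-specific wait, elided
    }
}

impl DelayNs for MicrosTimer {
    fn delay_ns(&mut self, ns: u32) {
        // Round up to the next whole microsecond so the delay is never
        // shorter than requested.
        self.busy_wait_us(ns.div_ceil(1000));
    }
}
```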
The hope is that one day we can invent a suitable `Duration` type that everyone is happy with, even on restricted 8/16-bit platforms; in that case we'll introduce a new `Delay` trait that uses it. In the meantime we have `DelayNs`, which provides `delay_ms`/`delay_us`/`delay_ns`. There is also the possibility of calling the current trait `Delay` (#541) and either giving up on finding a `Duration` type, or, if we do find one, adding it as a new method to `Delay`.
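So the current shape, plus the possible future extension, looks roughly like this (sketch; `EmbeddedDuration` is a placeholder name, not a real type):

```rust
pub trait DelayNs {
    /// The only required method, at the finest resolution.
    fn delay_ns(&mut self, ns: u32);

    // In the real crate, delay_us and delay_ms are provided methods
    // built on delay_ns (bodies elided; see the DelayUs sketch above
    // for the pattern).

    // Hypothetical future addition, if a suitable embedded-friendly
    // duration type is ever agreed on:
    // fn delay(&mut self, duration: EmbeddedDuration);
}
```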