rust-embedded / embedded-hal

A Hardware Abstraction Layer (HAL) for embedded systems
Apache License 2.0

`Delay` trait using `core::time::Duration` #523

Open madsmtm opened 12 months ago

madsmtm commented 12 months ago

Is there a reason that the `embedded_hal::delay::DelayUs` API is not defined as follows:

```rust
use core::time::Duration;

pub trait Delay {
    fn delay(&mut self, duration: Duration);

    #[inline]
    fn delay_us(&mut self, us: u32) {
        self.delay(Duration::from_micros(us.into()))
    }

    #[inline]
    fn delay_ms(&mut self, ms: u32) {
        self.delay(Duration::from_millis(ms.into()))
    }
}

impl<T: Delay> Delay for &mut T { /* ... */ }
```

I haven't actually implemented this API myself, but it seems like it would be both much cleaner and much more flexible (e.g. it would allow nanosecond resolution as well)?

Is the issue that `core::time::Duration` is too large to pass around?

madsmtm commented 12 months ago

Another oddity: `DelayUs` from embedded-hal has a default implementation for `delay_ms`, while the same trait from embedded-hal-async does not have this default implementation.

ryankurte commented 12 months ago

> Is the issue that `core::time::Duration` is too large to pass around?

IIRC this is because `core::time::Duration` contains a 64-bit field, and we didn't want to impose 64-bit arithmetic on (what are predominantly) 32-bit targets.
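For context, a quick host-side check of the footprint (the exact size is target-dependent, since the padding follows the target's `u64` alignment):

```rust
use core::mem::size_of;
use core::time::Duration;

fn main() {
    // Duration stores a u64 seconds count plus a u32 nanoseconds
    // count, so its fields alone take 12 bytes; alignment padding
    // typically brings the size up to 16 on targets that align
    // u64 to 8 bytes.
    println!("Duration: {} bytes", size_of::<Duration>());
    println!("u32:      {} bytes", size_of::<u32>());
}
```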

madsmtm commented 12 months ago

Fair enough, though perhaps it would still make sense to design the API as I've outlined above?

That way, people on 32-bit targets who are concerned about 64-bit arithmetic could keep using the usual `delay_ms`/`delay_us` methods (implementers could provide efficient versions of each), while everyone else could use the general `delay` method, which after inlining would likely compile down to the same thing in the common case of a constant duration.
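To make that concrete, here is a sketch of how an implementer might override the u32 convenience methods to keep a pure 32-bit fast path; `CycleDelay` and `busy_wait_us` are made-up names for illustration, not embedded-hal items:

```rust
use core::time::Duration;

// The proposed trait from above, repeated to keep the sketch
// self-contained.
pub trait Delay {
    fn delay(&mut self, duration: Duration);

    #[inline]
    fn delay_us(&mut self, us: u32) {
        self.delay(Duration::from_micros(us.into()))
    }

    #[inline]
    fn delay_ms(&mut self, ms: u32) {
        self.delay(Duration::from_millis(ms.into()))
    }
}

struct CycleDelay;

impl CycleDelay {
    fn busy_wait_us(&mut self, _us: u32) {
        // A platform-specific busy-wait would go here.
    }
}

impl Delay for CycleDelay {
    fn delay(&mut self, duration: Duration) {
        // Generic path: split into whole seconds plus the sub-second
        // remainder so each individual wait fits in a u32.
        for _ in 0..duration.as_secs() {
            self.busy_wait_us(1_000_000);
        }
        self.busy_wait_us(duration.subsec_micros());
    }

    // Overridden fast path: pure u32 arithmetic, no Duration at all.
    #[inline]
    fn delay_us(&mut self, us: u32) {
        self.busy_wait_us(us);
    }
}
```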

adamgreig commented 11 months ago

> Another oddity: `DelayUs` from embedded-hal has a default implementation for `delay_ms`, while the same trait from embedded-hal-async does not have this default implementation.

Thanks for pointing this out! This was due to a limitation of async functions in traits when the trait was added; it's now possible, so that's been fixed (see #535).
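For illustration, a default method on an async trait now looks like this (a minimal sketch; the trait name is made up and this is not embedded-hal-async's actual definition):

```rust
// Sketch of a default method on an async trait, which `async fn`
// in traits now supports.
pub trait AsyncDelay {
    async fn delay_us(&mut self, us: u32);

    // Default implementation delegating to delay_us, looping to
    // avoid u32 overflow for large millisecond values.
    async fn delay_ms(&mut self, ms: u32) {
        for _ in 0..ms {
            self.delay_us(1_000).await;
        }
    }
}
```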

> Fair enough, though perhaps it would still make sense to design the API as I've outlined above?

The problem is that the end users (who've picked a target) aren't the ones calling `delay`: it's driver authors, writing platform-agnostic code, who call it on some generic `Delay` provider. The end user can't choose which delay method a driver calls, so they would likely end up with delays based on core's `Duration`, which amounts to 12 bytes of fields (a `u64` seconds count plus a `u32` nanoseconds count), so the computation and memory overhead could be significant.
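For illustration, the calling pattern in question looks roughly like this; `SensorDriver` and `reset` are hypothetical names, while `DelayNs` is the embedded-hal trait mentioned below:

```rust
use embedded_hal::delay::DelayNs;

// A hypothetical platform-agnostic driver: the driver author, not
// the end user, decides which delay calls happen.
struct SensorDriver<D> {
    delay: D,
}

impl<D: DelayNs> SensorDriver<D> {
    fn reset(&mut self) {
        // The end user only supplies the concrete `D`; they can't
        // change this call to avoid a Duration-based API if that
        // were what the trait exposed.
        self.delay.delay_ms(10);
    }
}
```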

We have, however, just changed the trait (in #537) to permit nanosecond resolution, although many platforms will likely round up significantly (e.g. probably to the next whole µs).
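As a sketch of that rounding behaviour (not any particular HAL's code), a µs-resolution timer backing the nanosecond API might do something like this; `wait_us` is a made-up stand-in for a platform timer:

```rust
// Round up so the delay is never shorter than requested.
fn delay_ns(ns: u32, wait_us: impl FnOnce(u32)) {
    let us = ns.div_ceil(1_000); // round up to the next whole µs
    wait_us(us);
}
```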

The hope is that one day we can come up with a suitable Duration type that everyone is happy with, even on constrained 8/16-bit platforms; if that happens, we'll introduce a new Delay trait that uses it. In the meantime we have `DelayNs`, which provides `delay_ms`/`delay_us`/`delay_ns`. There is also the possibility of renaming the current trait to `Delay` (#541) and either giving up on finding a Duration type, or, if we do find one, simply adding it as a new method on `Delay`.
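Purely as an illustration of that design space (this is not a proposed or settled design), such a type might be a bounded 32-bit tick count:

```rust
// Hypothetical compact duration: 1 ns resolution, max ~4.29 s.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct NanosU32(pub u32);

impl NanosU32 {
    pub const fn from_micros(us: u32) -> Option<Self> {
        // Reject values that would overflow the 32-bit tick count.
        match us.checked_mul(1_000) {
            Some(ns) => Some(NanosU32(ns)),
            None => None,
        }
    }
}
```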