michaelbeaumont / dht-sensor

Rust embedded-hal based driver for the DHT11/DHT22 sensor
MIT License

Working async using `embedded-hal-async` (see last comment!) #17

Closed · TheLostLambda closed this 1 year ago

TheLostLambda commented 1 year ago

It's still a bit messy because I need some testing and input from @michaelbeaumont, but it works beautifully on my Pi Pico using Embassy!

Here is the example code that works on my Pico:

```rust
#![no_std]
#![no_main]
#![feature(type_alias_impl_trait)]

use defmt::*;
use dht_sensor::*;
use embassy_executor::Spawner;
use embassy_rp::gpio::{Level, OutputOpenDrain};
use embassy_time::{Delay, Duration, Timer};
use {defmt_rtt as _, panic_probe as _};

#[embassy_executor::main]
async fn main(_spawner: Spawner) {
    // Initialise peripherals
    let p = embassy_rp::init(Default::default());

    // Pull the pin high for initialisation
    let mut sensor_pin = OutputOpenDrain::new(p.PIN_12, Level::High);

    // Wait for the sensor to initialise...
    info!("Waiting on the sensor...");
    Timer::after(Duration::from_secs(1)).await;

    // Loop
    loop {
        match dht22::read(&mut Delay, &mut sensor_pin).await {
            Ok(dht22::Reading {
                temperature,
                relative_humidity,
            }) => info!("{}°C, {}% RH", temperature, relative_humidity),
            Err(e) => match e {
                DhtError::PinError => error!("Pin Error!"),
                DhtError::ChecksumMismatch => error!("Checksum Mismatch!"),
                DhtError::Timeout => error!("Timeout!"),
            },
        }

        // Wait 1s
        Timer::after(Duration::from_secs(1)).await;
    }
}
```

Pretty much identical to how I was using the sync version before!

Now I suppose I need your help to:

  1. Test the stm32f042 examples
  2. Let me know if you're alright with getting rid of the `<E>` generic on the `Error` type. It was a source of frustration when implementing the `await_with_timeout()` function, and since all of my error types were `Infallible` anyway, I dropped that bit of information. It's nicer working without generics everywhere, but it does lose some detail, and I'm not thrilled with my stop-gap in the sync `read.rs` file... If you want it back, I'll restore it and add some turbofish annotations that I think should fix the issues I ran into in `read_async.rs`.
  3. Let me know if you're okay with replacing `InputOutputPin<E>` with just `InputPin + OutputPin`; this is kinda tied to point 2, I suppose!
  4. If possible, those dev-dependencies should probably be updated to use versions from crates.io; currently, any cargo command that builds the examples fails because it can't resolve those local paths!
  5. Finally, since the dependency tree from adding async is relatively insignificant (see the cargo tree below), how would you feel about splitting the library into blocking and async modules and getting rid of the feature flag? It might help disentangle things a bit, lets users use both APIs at once (if they ever need to), and shouldn't affect compile times much at all!

(image: `cargo tree` output)
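To make point 2 concrete, here is a rough host-side sketch (hypothetical, not the crate's actual definition) of what the flattened, non-generic error type would look like, using the same variants matched in the example above:

```rust
// Hypothetical flattened error type: no <E> generic, so pin failures are
// collapsed into a single variant instead of carrying the HAL's error type.
#[derive(Debug, PartialEq)]
enum DhtError {
    PinError,         // any failure driving or reading the GPIO pin
    ChecksumMismatch, // the fifth byte didn't match the payload sum
    Timeout,          // the sensor never produced the expected edge
}

fn describe(e: &DhtError) -> &'static str {
    match e {
        DhtError::PinError => "pin error",
        DhtError::ChecksumMismatch => "checksum mismatch",
        DhtError::Timeout => "timeout",
    }
}

fn main() {
    // Matching stays simple because there is no generic parameter to name.
    println!("{}", describe(&DhtError::ChecksumMismatch));
}
```

The trade-off is exactly the one described above: callers no longer need turbofish annotations, but the underlying pin error is no longer recoverable from the variant.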

Thanks for the lovely library! Now I just need your help to polish up what's already here!

TheLostLambda commented 1 year ago

Hmm, sad news... I've tried things with a debug build, and it seems that (with Embassy as the executor, at least) it struggles to keep up with the timing-sensitive protocol.

I'll investigate further (as it looks like Embassy is being particularly slow here), but I'd suggest testing with `--release` if you run into any issues!
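As a side note, if running under `--release` is inconvenient while debugging, a standard Cargo option (an assumption that it helps in this case, but the mechanism itself is stock Cargo) is to raise the optimization level of the dev profile in `Cargo.toml`:

```toml
# Optimize debug builds so timing-sensitive bit-banging keeps up,
# while keeping debug assertions and symbols available.
[profile.dev]
opt-level = 3
```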

TheLostLambda commented 1 year ago

Double update: this isn't unique to my async version. If I edit the sync version to output a signal on a notify pin (so I can look at latencies with my oscilloscope), it also times out every time...

Might just be something that's best in release mode regardless!

TheLostLambda commented 1 year ago

Last few updates! Overloading Embassy with busy tasks means that every `.await` encountered in this async driver can be interrupted and bits can be missed, leading to checksum errors!
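For context on why a missed bit surfaces as a checksum error: the DHT11/DHT22 frame is five bytes, and the fifth must equal the low byte of the sum of the first four. A minimal host-side sketch of that check (the frame values below are illustrative, not captured from hardware):

```rust
// Verify the standard DHT checksum rule: byte 4 equals the wrapping sum
// of bytes 0..=3 (humidity high/low, temperature high/low).
fn checksum_ok(frame: [u8; 5]) -> bool {
    let sum = frame[0]
        .wrapping_add(frame[1])
        .wrapping_add(frame[2])
        .wrapping_add(frame[3]);
    sum == frame[4]
}

fn main() {
    // Example DHT22 frame: 0x028C = 652 -> 65.2% RH, 0x0107 = 263 -> 26.3 °C.
    let frame = [0x02, 0x8C, 0x01, 0x07, 0x96];
    assert!(checksum_ok(frame));
    // A single missed bit corrupts a byte, so the checksum no longer matches.
    assert!(!checksum_ok([0x02, 0x8C, 0x01, 0x06, 0x96]));
    println!("checksum checks passed");
}
```

So any preemption mid-transfer doesn't just delay the read, it silently corrupts a byte and gets caught (if we're lucky) only at this final check.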

Currently the whole read takes 22ms, 18ms of which is a blocking wait! As a side note (maybe for another PR), the DHT22 only needs a 1ms delay there! That brings the total read time down to 5ms and is definitely something worth implementing here either way!

Additionally (though this would matter most for the DHT11, with its longer initial pull-down), how would you feel about an async version that uses an async wait there (before data transmission), but blocking waits for all of the microsecond delays when receiving the data? That way, the actual data reading can't be interrupted by other tasks (just real interrupts, I suppose).

I don't know if that's the best-practices solution, but a driver that relies on sensitive timing handing control over to literally any other task, with no guarantee it will hand control back soon (let alone within the next few microseconds), seems a bit silly.
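A rough, host-runnable sketch of that hybrid shape (the delay functions and the `block_on` executor here are stand-ins of my own, not the real Embassy or embedded-hal APIs):

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Minimal no-op executor, just so the sketch runs without an async runtime.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw_waker()
        }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = std::pin::pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

// Stand-ins: on target these would be Timer::after(...).await and a
// busy-wait microsecond delay, respectively.
async fn async_delay_ms(_ms: u32) {}
fn blocking_delay_us(_us: u32) {}

// The proposed structure: one cooperative wait before transmission,
// then blocking waits once the sensor starts clocking out bits.
async fn read_hybrid() -> &'static str {
    async_delay_ms(1).await; // start signal: fine if other tasks run here
    for _ in 0..40 {
        blocking_delay_us(50); // per-bit timing: must not yield mid-byte
    }
    "read complete"
}

fn main() {
    println!("{}", block_on(read_hybrid()));
}
```

The point of the shape is that the only `.await` sits before the sensor starts transmitting, so the executor can't steal time during the 40-bit transfer itself.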

I've had a blast coding all of this up, but I'm wondering if we wouldn't be better off addressing things with just two smaller PRs: one adding async to that initial 18ms wait, and the other cutting that 18ms down to 1ms for the DHT22.

Sorry for that but let me know what you think!

TheLostLambda commented 1 year ago

Closing in favor of #18! (But maybe still some interesting information here)