embassy-rs / nrf-softdevice

GATT Client support for receiving notifications #195

Closed: Univa closed this 9 months ago

Univa commented 9 months ago

Closes #102.

Largely based on the initial instructions by @sureshjoshi here: https://github.com/embassy-rs/nrf-softdevice/issues/102#issuecomment-1620341184

I wrote it in a similar fashion to how the BLE_GATTS_EVTS_BLE_GATTS_EVT_WRITE event is handled in the GATT server, and I also updated the gatt_client macro to handle notifications.

Note: no extra method was added to do a CCCD write, so enabling notifications requires calling gatt_client::write.
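For example, here is a minimal sketch of enabling notifications that way, assuming the CCCD handle is known from discovery (the battery_level_cccd_handle field name is hypothetical, not generated by the macro). Writing 0x0001 little-endian to the CCCD turns notifications on:

// Hypothetical field holding the CCCD handle found during discovery.
let cccd_handle: u16 = client.battery_level_cccd_handle;
// 0x0001 (little-endian) enables notifications; 0x0002 would be indications.
gatt_client::write(&conn, cccd_handle, &[0x01, 0x00]).await.unwrap();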

sureshjoshi commented 9 months ago

Pretty stoked to see this here!

Do you think it's worth adding/updating a simple example as well, to show usage?

Univa commented 9 months ago

Do you think it's worth adding/updating a simple example as well, to show usage?

I just pushed an update to the ble_bas_central example since it already had notifications enabled on the battery level characteristic.

Apart from that, I also added:

alexmoon commented 9 months ago

Looks good to me. Thanks!

peter9477 commented 9 months ago

Note that this doesn't address a potential panic with the original approach to discover_service(), where any unexpected event (other than "discovery done" or a timeout) causes a panic. It's possible for other events to arrive at that time, for example if discover_service() is called immediately after a connection that is still dealing with connection-setup traffic such as MTU exchange and encryption. (I can observe this reliably if I connect to my peripheral using Android nRF Connect while it has the simulated ANCS service enabled.)

I hadn't gotten around to implementing the client notification handling myself (so thank you @Univa for your work!), but I had figured that when I did, I would have to restructure these routines, perhaps so that there would be a single call to gatt_client::run(), similar to how gatt_server::run() is used. That way it would be capable of handling all required messages in one place, without needing a panic() for anything unexpected.

Another approach that may work is simply to take out the panic, replacing it with a {} to ignore unexpected events, but I haven't tried analyzing whether that would overlook anything important. You'd then have to call discover_service(), enable your CCCD, then call run(). I suspect that will be awkward if you have multiple services to discover and multiple CCCDs, and I could easily see it ending up missing critical events if it's not all handled in one place within a single run() call.
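For concreteness, a paraphrased sketch of the "ignore instead of panic" idea (this is illustrative, not the actual crate source; the constant names follow the SoftDevice bindgen constants exposed via nrf_softdevice::raw):

use core::ops::ControlFlow;
use nrf_softdevice::raw;

// Classify one GATTC event during discovery: Break when discovery finishes
// or times out, Continue (instead of the old panic) on anything unexpected.
fn classify_discovery_evt(evt_id: u32) -> ControlFlow<&'static str> {
    match evt_id {
        raw::BLE_GATTC_EVTS_BLE_GATTC_EVT_PRIM_SRVC_DISC_RSP => ControlFlow::Break("done"),
        raw::BLE_GATTC_EVTS_BLE_GATTC_EVT_TIMEOUT => ControlFlow::Break("timeout"),
        // previously: panic!("unexpected event"); now just keep waiting
        _ => ControlFlow::Continue(()),
    }
}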

sureshjoshi commented 8 months ago

@Univa Have you had a situation with notifications from multiple services/characteristics?

In the current setup, if I use a single connection with multiple macro'd clients, then there is an error with multiple tasks waiting on the same portal.

So, the handle for a client probably shouldn't be the connection object, unless we can artificially build out a clone, but one that doesn't interfere with the underlying connection mechanics (which seems sketchy to me).

I think the idea above with consolidating everything per client into a single run might be safer.

Alternatively, the portals would need to be set up taking both the connection handle and the characteristic handle into account.

As a workaround, I'm experimenting with making a synthetic Client which receives the commands and passes them along. Not a very elegant solution, but it might be useful.

Here's what that kinda looks like:


struct MyClient {
    client1: Client1,
    client2: Client2,
}

enum MyClientEvent {
    Event1(Client1Event),
    Event2(Client2Event),
}

impl Client for MyClient {
    type Event = MyClientEvent;

    fn on_hvx(
        &self,
        conn: &nrf_softdevice::ble::Connection,
        type_: gatt_client::HvxType,
        handle: u16,
        data: &[u8],
    ) -> Option<Self::Event> {
        // Forward the notification to each inner client and return the
        // first event that one of them recognizes.
        self.client1
            .on_hvx(conn, type_, handle, data)
            .map(MyClientEvent::Event1)
            .or_else(|| {
                self.client2
                    .on_hvx(conn, type_, handle, data)
                    .map(MyClientEvent::Event2)
            })
    }

    // ... (remaining Client trait methods omitted)
}

// ...

let client = MyClient { client1, client2 };

gatt_client::run(&connection, &client, |event| match event {
    MyClientEvent::Event1(event1) => match event1 {
        Client1Event::MyNotification(val) => {
            todo!()
        }
    },
    MyClientEvent::Event2(event2) => match event2 {
        Client2Event::MyNotification(val) => {
            todo!()
        }
    },
})
.await;

As you can see, not very scalable, but good in a pinch.

I think the better strategy is either redoing the macros to properly handle multiple clients and multiple services/characteristics, or at least keying the portal calls on the characteristic handles.

Univa commented 8 months ago

@sureshjoshi I did encounter that issue with multiple tasks waiting on CONNECT_PORTAL, but I just used a mutex in combination with embassy_futures::select to connect to multiple peripherals. The mutex makes sure the clients connect one at a time, only moving on to the next once a connection has been successfully established. This avoids the portal issue, but since the connections happen one at a time, a failure to connect to one peripheral will hold up the others. Definitely not an ideal solution, but it was good enough for me.

I haven't yet encountered the issues that @peter9477 mentioned, but personally I wouldn't mind if run() were changed to handle the client events in one place.
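For reference, a minimal sketch of that connect-serialization workaround, assuming two central connections on one Softdevice. CONNECT_LOCK, connect_and_serve, and the reuse of BatteryServiceClient here are illustrative, not crate API:

use defmt::unwrap;
use embassy_futures::select::select;
use embassy_sync::blocking_mutex::raw::CriticalSectionRawMutex;
use embassy_sync::mutex::Mutex;
use nrf_softdevice::ble::{central, gatt_client};
use nrf_softdevice::Softdevice;

// Serializes the connect phase so only one task waits on the connect
// portal at a time.
static CONNECT_LOCK: Mutex<CriticalSectionRawMutex, ()> = Mutex::new(());

async fn connect_and_serve(sd: &Softdevice, config: &central::ConnectConfig<'_>) {
    // Hold the lock only while establishing the connection.
    let conn = {
        let _guard = CONNECT_LOCK.lock().await;
        unwrap!(central::connect(sd, config).await)
    };
    let client: BatteryServiceClient = unwrap!(gatt_client::discover(&conn).await);
    // ... enable CCCDs, then run this peripheral's event loop
}

// Elsewhere, drive both peripherals concurrently:
// select(connect_and_serve(sd, &cfg_a), connect_and_serve(sd, &cfg_b)).await;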

sureshjoshi commented 8 months ago

@Univa That's for multiple peripherals though, right? My question was about multiple services in a single peripheral, since they all wait on the same connection object.

Univa commented 8 months ago

My question was about multiple services in a single peripheral.

Ah, my bad, I see what you're saying. Yeah, my approach was for one service across multiple peripherals. I haven't yet needed multiple services, since all the characteristics I need are on one service, so unfortunately I don't have any other ideas at the moment.

I think I like your idea of changing the proc macros somehow to better support that use case, or just changing run(), although I think changing run() to support this could be more complicated. As for the portals, I don't see a straightforward way of changing them to take the characteristic handles into account, nor do I think it's necessary for this use case. If we take a proc macro approach, I could see this code today:

#[nrf_softdevice::gatt_client(uuid = "180f")]
struct BatteryServiceClient {
    #[characteristic(uuid = "2a19", read, write, notify)]
    battery_level: u8,
}

// ...

client.battery_level_cccd_write(true).await.unwrap();

gatt_client::run(&conn, &client, |event| match event {
    BatteryServiceClientEvent::BatteryLevelNotification(val) => {
        info!("battery level notification: {}", val);
    }
})
.await;

...could become:

#[nrf_softdevice::gatt_service_client(uuid = "180f")]
struct BatteryServiceClient {
    #[characteristic(uuid = "2a19", read, write, notify)]
    battery_level: u8,
}

#[nrf_softdevice::gatt_client]
struct MyClient {
    battery_service: BatteryServiceClient,
}

// ...

client.battery_service.battery_level_cccd_write(true).await.unwrap();

gatt_client::run(&conn, &client, |event| match event {
    MyClientEvent::BatteryService(event) => match event {
        BatteryServiceClientEvent::BatteryLevelNotification(val) => {
            info!("battery level notification: {}", val);
        }
    }
})
.await;

Under the hood, it could work in a similar fashion to your workaround: basically, a macro applied to a struct that provides the impl block calling on_hvx for each service. I imagine this would require another trait apart from the existing gatt_client::Client trait.
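For concreteness, a hypothetical sketch of what that generated impl might expand to, mirroring the manual workaround above (MyClientEvent::BatteryService and the exact trait bounds are assumptions, not existing crate API):

impl gatt_client::Client for MyClient {
    type Event = MyClientEvent;

    fn on_hvx(
        &self,
        conn: &nrf_softdevice::ble::Connection,
        type_: gatt_client::HvxType,
        handle: u16,
        data: &[u8],
    ) -> Option<Self::Event> {
        // Delegate to each service client in turn; with more fields this
        // would chain with or_else, exactly like the manual MyClient above.
        self.battery_service
            .on_hvx(conn, type_, handle, data)
            .map(MyClientEvent::BatteryService)
    }

    // ... the remaining trait methods would be generated similarly
}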